I0213 10:47:12.994177 8 e2e.go:224] Starting e2e run "30a58492-4e4e-11ea-aba9-0242ac110007" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581590831 - Will randomize all specs
Will run 201 of 2164 specs

Feb 13 10:47:13.250: INFO: >>> kubeConfig: /root/.kube/config
Feb 13 10:47:13.253: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 13 10:47:13.271: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 13 10:47:13.307: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 13 10:47:13.307: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 13 10:47:13.307: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 13 10:47:13.316: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 13 10:47:13.316: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 13 10:47:13.316: INFO: e2e test version: v1.13.12
Feb 13 10:47:13.317: INFO: kube-apiserver version: v1.13.8
S
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 10:47:13.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
Feb 13 10:47:13.462: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 13 10:47:35.621: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zvvkk PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 10:47:35.621: INFO: >>> kubeConfig: /root/.kube/config
I0213 10:47:35.695487 8 log.go:172] (0xc0007eac60) (0xc0015994a0) Create stream
I0213 10:47:35.695544 8 log.go:172] (0xc0007eac60) (0xc0015994a0) Stream added, broadcasting: 1
I0213 10:47:35.701802 8 log.go:172] (0xc0007eac60) Reply frame received for 1
I0213 10:47:35.701848 8 log.go:172] (0xc0007eac60) (0xc00097d0e0) Create stream
I0213 10:47:35.701863 8 log.go:172] (0xc0007eac60) (0xc00097d0e0) Stream added, broadcasting: 3
I0213 10:47:35.703180 8 log.go:172] (0xc0007eac60) Reply frame received for 3
I0213 10:47:35.703224 8 log.go:172] (0xc0007eac60) (0xc001599540) Create stream
I0213 10:47:35.703238 8 log.go:172] (0xc0007eac60) (0xc001599540) Stream added, broadcasting: 5
I0213 10:47:35.704275 8 log.go:172] (0xc0007eac60) Reply frame received for 5
I0213 10:47:35.814757 8 log.go:172] (0xc0007eac60) Data frame received for 3
I0213 10:47:35.814804 8 log.go:172] (0xc00097d0e0) (3) Data frame handling
I0213 10:47:35.814825 8 log.go:172] (0xc00097d0e0) (3) Data frame sent
I0213 10:47:35.952926 8 log.go:172] (0xc0007eac60) Data frame received for 1
I0213 10:47:35.952991 8 log.go:172] (0xc0007eac60) (0xc00097d0e0) Stream removed, broadcasting: 3
I0213 10:47:35.953035 8 log.go:172] (0xc0015994a0) (1) Data frame handling
I0213 10:47:35.953053 8 log.go:172] (0xc0015994a0) (1) Data frame sent
I0213 10:47:35.953064 8 log.go:172] (0xc0007eac60) (0xc001599540) Stream removed, broadcasting: 5
I0213 10:47:35.953092 8 log.go:172] (0xc0007eac60) (0xc0015994a0) Stream removed, broadcasting: 1
I0213 10:47:35.953114 8 log.go:172] (0xc0007eac60) Go away received
I0213 10:47:35.953605 8 log.go:172] (0xc0007eac60) (0xc0015994a0) Stream removed, broadcasting: 1
I0213 10:47:35.953635 8 log.go:172] (0xc0007eac60) (0xc00097d0e0) Stream removed, broadcasting: 3
I0213 10:47:35.953652 8 log.go:172] (0xc0007eac60) (0xc001599540) Stream removed, broadcasting: 5
Feb 13 10:47:35.953: INFO: Exec stderr: ""
Feb 13 10:47:35.953: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zvvkk PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 10:47:35.953: INFO: >>> kubeConfig: /root/.kube/config
I0213 10:47:36.047402 8 log.go:172] (0xc0009c7080) (0xc001f7db80) Create stream
I0213 10:47:36.048190 8 log.go:172] (0xc0009c7080) (0xc001f7db80) Stream added, broadcasting: 1
I0213 10:47:36.056101 8 log.go:172] (0xc0009c7080) Reply frame received for 1
I0213 10:47:36.056229 8 log.go:172] (0xc0009c7080) (0xc00097d180) Create stream
I0213 10:47:36.056250 8 log.go:172] (0xc0009c7080) (0xc00097d180) Stream added, broadcasting: 3
I0213 10:47:36.059340 8 log.go:172] (0xc0009c7080) Reply frame received for 3
I0213 10:47:36.059372 8 log.go:172] (0xc0009c7080) (0xc00151ff40) Create stream
I0213 10:47:36.059379 8 log.go:172] (0xc0009c7080) (0xc00151ff40) Stream added, broadcasting: 5
I0213 10:47:36.065072 8 log.go:172] (0xc0009c7080) Reply frame received for 5
I0213 10:47:36.233885 8 log.go:172] (0xc0009c7080) Data frame received for 3
I0213 10:47:36.233975 8 log.go:172] (0xc00097d180) (3) Data frame handling
I0213 10:47:36.234054 8 log.go:172] (0xc00097d180) (3) Data frame sent
I0213 10:47:36.376421 8 log.go:172] (0xc0009c7080) (0xc00097d180) Stream removed, broadcasting: 3
I0213 10:47:36.376521 8 log.go:172] (0xc0009c7080) Data frame received for 1
I0213 10:47:36.376547 8 log.go:172] (0xc0009c7080) (0xc00151ff40) Stream removed, broadcasting: 5
I0213 10:47:36.376578 8 log.go:172] (0xc001f7db80) (1) Data frame handling
I0213 10:47:36.376588 8 log.go:172] (0xc001f7db80) (1) Data frame sent
I0213 10:47:36.376597 8 log.go:172] (0xc0009c7080) (0xc001f7db80) Stream removed, broadcasting: 1
I0213 10:47:36.376607 8 log.go:172] (0xc0009c7080) Go away received
I0213 10:47:36.376821 8 log.go:172] (0xc0009c7080) (0xc001f7db80) Stream removed, broadcasting: 1
I0213 10:47:36.376854 8 log.go:172] (0xc0009c7080) (0xc00097d180) Stream removed, broadcasting: 3
I0213 10:47:36.376866 8 log.go:172] (0xc0009c7080) (0xc00151ff40) Stream removed, broadcasting: 5
Feb 13 10:47:36.376: INFO: Exec stderr: ""
Feb 13 10:47:36.376: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zvvkk PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 10:47:36.376: INFO: >>> kubeConfig: /root/.kube/config
I0213 10:47:36.467779 8 log.go:172] (0xc000934790) (0xc000cea500) Create stream
I0213 10:47:36.467854 8 log.go:172] (0xc000934790) (0xc000cea500) Stream added, broadcasting: 1
I0213 10:47:36.486377 8 log.go:172] (0xc000934790) Reply frame received for 1
I0213 10:47:36.486456 8 log.go:172] (0xc000934790) (0xc001cbc000) Create stream
I0213 10:47:36.486478 8 log.go:172] (0xc000934790) (0xc001cbc000) Stream added, broadcasting: 3
I0213 10:47:36.488704 8 log.go:172] (0xc000934790) Reply frame received for 3
I0213 10:47:36.488820 8 log.go:172] (0xc000934790) (0xc00151e000) Create stream
I0213 10:47:36.488836 8 log.go:172] (0xc000934790) (0xc00151e000) Stream added, broadcasting: 5
I0213 10:47:36.491918 8 log.go:172] (0xc000934790) Reply frame received for 5
I0213 10:47:36.929495 8 log.go:172] (0xc000934790) Data frame received for 3
I0213 10:47:36.929561 8 log.go:172] (0xc001cbc000) (3) Data frame handling
I0213 10:47:36.929579 8 log.go:172] (0xc001cbc000) (3) Data frame sent
I0213 10:47:37.031929 8 log.go:172] (0xc000934790) Data frame received for 1
I0213 10:47:37.031998 8 log.go:172] (0xc000934790) (0xc00151e000) Stream removed, broadcasting: 5
I0213 10:47:37.032034 8 log.go:172] (0xc000cea500) (1) Data frame handling
I0213 10:47:37.032044 8 log.go:172] (0xc000cea500) (1) Data frame sent
I0213 10:47:37.032058 8 log.go:172] (0xc000934790) (0xc001cbc000) Stream removed, broadcasting: 3
I0213 10:47:37.032070 8 log.go:172] (0xc000934790) (0xc000cea500) Stream removed, broadcasting: 1
I0213 10:47:37.032080 8 log.go:172] (0xc000934790) Go away received
I0213 10:47:37.032411 8 log.go:172] (0xc000934790) (0xc000cea500) Stream removed, broadcasting: 1
I0213 10:47:37.032420 8 log.go:172] (0xc000934790) (0xc001cbc000) Stream removed, broadcasting: 3
I0213 10:47:37.032428 8 log.go:172] (0xc000934790) (0xc00151e000) Stream removed, broadcasting: 5
Feb 13 10:47:37.032: INFO: Exec stderr: ""
Feb 13 10:47:37.032: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zvvkk PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 10:47:37.032: INFO: >>> kubeConfig: /root/.kube/config
I0213 10:47:37.099747 8 log.go:172] (0xc0007ea580) (0xc00151e1e0) Create stream
I0213 10:47:37.099801 8 log.go:172] (0xc0007ea580) (0xc00151e1e0) Stream added, broadcasting: 1
I0213 10:47:37.103454 8 log.go:172] (0xc0007ea580) Reply frame received for 1
I0213 10:47:37.103497 8 log.go:172] (0xc0007ea580) (0xc000e00000) Create stream
I0213 10:47:37.103516 8 log.go:172] (0xc0007ea580) (0xc000e00000) Stream added, broadcasting: 3
I0213 10:47:37.105450 8 log.go:172] (0xc0007ea580) Reply frame received for 3
I0213 10:47:37.105481 8 log.go:172] (0xc0007ea580) (0xc000e000a0) Create stream
I0213 10:47:37.105493 8 log.go:172] (0xc0007ea580) (0xc000e000a0) Stream added, broadcasting: 5
I0213 10:47:37.106615 8 log.go:172] (0xc0007ea580) Reply frame received for 5
I0213 10:47:37.252882 8 log.go:172] (0xc0007ea580) Data frame received for 3
I0213 10:47:37.252914 8 log.go:172] (0xc000e00000) (3) Data frame handling
I0213 10:47:37.252941 8 log.go:172] (0xc000e00000) (3) Data frame sent
I0213 10:47:37.488887 8 log.go:172] (0xc0007ea580) (0xc000e00000) Stream removed, broadcasting: 3
I0213 10:47:37.489021 8 log.go:172] (0xc0007ea580) Data frame received for 1
I0213 10:47:37.489050 8 log.go:172] (0xc00151e1e0) (1) Data frame handling
I0213 10:47:37.489074 8 log.go:172] (0xc00151e1e0) (1) Data frame sent
I0213 10:47:37.489100 8 log.go:172] (0xc0007ea580) (0xc000e000a0) Stream removed, broadcasting: 5
I0213 10:47:37.489144 8 log.go:172] (0xc0007ea580) (0xc00151e1e0) Stream removed, broadcasting: 1
I0213 10:47:37.489157 8 log.go:172] (0xc0007ea580) Go away received
I0213 10:47:37.489565 8 log.go:172] (0xc0007ea580) (0xc00151e1e0) Stream removed, broadcasting: 1
I0213 10:47:37.489583 8 log.go:172] (0xc0007ea580) (0xc000e00000) Stream removed, broadcasting: 3
I0213 10:47:37.489593 8 log.go:172] (0xc0007ea580) (0xc000e000a0) Stream removed, broadcasting: 5
Feb 13 10:47:37.489: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 13 10:47:37.489: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zvvkk PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 10:47:37.489: INFO: >>> kubeConfig: /root/.kube/config
I0213 10:47:37.560126 8 log.go:172] (0xc0009c6c60) (0xc00097c460) Create stream
I0213 10:47:37.560230 8 log.go:172] (0xc0009c6c60) (0xc00097c460) Stream added, broadcasting: 1
I0213 10:47:37.566213 8 log.go:172] (0xc0009c6c60) Reply frame received for 1
I0213 10:47:37.566305 8 log.go:172] (0xc0009c6c60) (0xc000e00140) Create stream
I0213 10:47:37.566349 8 log.go:172] (0xc0009c6c60) (0xc000e00140) Stream added, broadcasting: 3
I0213 10:47:37.567957 8 log.go:172] (0xc0009c6c60) Reply frame received for 3
I0213 10:47:37.568003 8 log.go:172] (0xc0009c6c60) (0xc00097c500) Create stream
I0213 10:47:37.568020 8 log.go:172] (0xc0009c6c60) (0xc00097c500) Stream added, broadcasting: 5
I0213 10:47:37.569020 8 log.go:172] (0xc0009c6c60) Reply frame received for 5
I0213 10:47:37.694364 8 log.go:172] (0xc0009c6c60) Data frame received for 3
I0213 10:47:37.694524 8 log.go:172] (0xc000e00140) (3) Data frame handling
I0213 10:47:37.694571 8 log.go:172] (0xc000e00140) (3) Data frame sent
I0213 10:47:37.811254 8 log.go:172] (0xc0009c6c60) (0xc000e00140) Stream removed, broadcasting: 3
I0213 10:47:37.811382 8 log.go:172] (0xc0009c6c60) Data frame received for 1
I0213 10:47:37.811427 8 log.go:172] (0xc0009c6c60) (0xc00097c500) Stream removed, broadcasting: 5
I0213 10:47:37.811495 8 log.go:172] (0xc00097c460) (1) Data frame handling
I0213 10:47:37.811517 8 log.go:172] (0xc00097c460) (1) Data frame sent
I0213 10:47:37.811561 8 log.go:172] (0xc0009c6c60) (0xc00097c460) Stream removed, broadcasting: 1
I0213 10:47:37.811575 8 log.go:172] (0xc0009c6c60) Go away received
I0213 10:47:37.811840 8 log.go:172] (0xc0009c6c60) (0xc00097c460) Stream removed, broadcasting: 1
I0213 10:47:37.811868 8 log.go:172] (0xc0009c6c60) (0xc000e00140) Stream removed, broadcasting: 3
I0213 10:47:37.811889 8 log.go:172] (0xc0009c6c60) (0xc00097c500) Stream removed, broadcasting: 5
Feb 13 10:47:37.811: INFO: Exec stderr: ""
Feb 13 10:47:37.812: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zvvkk PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 10:47:37.812: INFO: >>> kubeConfig: /root/.kube/config
I0213 10:47:37.878689 8 log.go:172] (0xc0014c6630) (0xc001cbc3c0) Create stream
I0213 10:47:37.878762 8 log.go:172] (0xc0014c6630) (0xc001cbc3c0) Stream added, broadcasting: 1
I0213 10:47:37.883906 8 log.go:172] (0xc0014c6630) Reply frame received for 1
I0213 10:47:37.883946 8 log.go:172] (0xc0014c6630) (0xc0015980a0) Create stream
I0213 10:47:37.883959 8 log.go:172] (0xc0014c6630) (0xc0015980a0) Stream added, broadcasting: 3
I0213 10:47:37.885852 8 log.go:172] (0xc0014c6630) Reply frame received for 3
I0213 10:47:37.885895 8 log.go:172] (0xc0014c6630) (0xc00151e280) Create stream
I0213 10:47:37.885910 8 log.go:172] (0xc0014c6630) (0xc00151e280) Stream added, broadcasting: 5
I0213 10:47:37.888335 8 log.go:172] (0xc0014c6630) Reply frame received for 5
I0213 10:47:37.996144 8 log.go:172] (0xc0014c6630) Data frame received for 3
I0213 10:47:37.996205 8 log.go:172] (0xc0015980a0) (3) Data frame handling
I0213 10:47:37.996218 8 log.go:172] (0xc0015980a0) (3) Data frame sent
I0213 10:47:38.095898 8 log.go:172] (0xc0014c6630) Data frame received for 1
I0213 10:47:38.095953 8 log.go:172] (0xc0014c6630) (0xc0015980a0) Stream removed, broadcasting: 3
I0213 10:47:38.095995 8 log.go:172] (0xc001cbc3c0) (1) Data frame handling
I0213 10:47:38.096023 8 log.go:172] (0xc001cbc3c0) (1) Data frame sent
I0213 10:47:38.096054 8 log.go:172] (0xc0014c6630) (0xc00151e280) Stream removed, broadcasting: 5
I0213 10:47:38.096101 8 log.go:172] (0xc0014c6630) (0xc001cbc3c0) Stream removed, broadcasting: 1
I0213 10:47:38.096121 8 log.go:172] (0xc0014c6630) Go away received
I0213 10:47:38.096694 8 log.go:172] (0xc0014c6630) (0xc001cbc3c0) Stream removed, broadcasting: 1
I0213 10:47:38.096819 8 log.go:172] (0xc0014c6630) (0xc0015980a0) Stream removed, broadcasting: 3
I0213 10:47:38.096832 8 log.go:172] (0xc0014c6630) (0xc00151e280) Stream removed, broadcasting: 5
Feb 13 10:47:38.096: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 13 10:47:38.096: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zvvkk PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 10:47:38.096: INFO: >>> kubeConfig: /root/.kube/config
I0213 10:47:38.175820 8 log.go:172] (0xc0014c6b00) (0xc001cbc640) Create stream
I0213 10:47:38.175886 8 log.go:172] (0xc0014c6b00) (0xc001cbc640) Stream added, broadcasting: 1
I0213 10:47:38.179231 8 log.go:172] (0xc0014c6b00) Reply frame received for 1
I0213 10:47:38.179285 8 log.go:172] (0xc0014c6b00) (0xc00151e320) Create stream
I0213 10:47:38.179302 8 log.go:172] (0xc0014c6b00) (0xc00151e320) Stream added, broadcasting: 3
I0213 10:47:38.180138 8 log.go:172] (0xc0014c6b00) Reply frame received for 3
I0213 10:47:38.180169 8 log.go:172] (0xc0014c6b00) (0xc00151e3c0) Create stream
I0213 10:47:38.180182 8 log.go:172] (0xc0014c6b00) (0xc00151e3c0) Stream added, broadcasting: 5
I0213 10:47:38.180939 8 log.go:172] (0xc0014c6b00) Reply frame received for 5
I0213 10:47:38.274831 8 log.go:172] (0xc0014c6b00) Data frame received for 3
I0213 10:47:38.274876 8 log.go:172] (0xc00151e320) (3) Data frame handling
I0213 10:47:38.274919 8 log.go:172] (0xc00151e320) (3) Data frame sent
I0213 10:47:38.373384 8 log.go:172] (0xc0014c6b00) Data frame received for 1
I0213 10:47:38.373458 8 log.go:172] (0xc0014c6b00) (0xc00151e320) Stream removed, broadcasting: 3
I0213 10:47:38.373522 8 log.go:172] (0xc001cbc640) (1) Data frame handling
I0213 10:47:38.373554 8 log.go:172] (0xc001cbc640) (1) Data frame sent
I0213 10:47:38.373583 8 log.go:172] (0xc0014c6b00) (0xc00151e3c0) Stream removed, broadcasting: 5
I0213 10:47:38.373633 8 log.go:172] (0xc0014c6b00) (0xc001cbc640) Stream removed, broadcasting: 1
I0213 10:47:38.373670 8 log.go:172] (0xc0014c6b00) Go away received
I0213 10:47:38.373824 8 log.go:172] (0xc0014c6b00) (0xc001cbc640) Stream removed, broadcasting: 1
I0213 10:47:38.373842 8 log.go:172] (0xc0014c6b00) (0xc00151e320) Stream removed, broadcasting: 3
I0213 10:47:38.373858 8 log.go:172] (0xc0014c6b00) (0xc00151e3c0) Stream removed, broadcasting: 5
Feb 13 10:47:38.373: INFO: Exec stderr: ""
Feb 13 10:47:38.373: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zvvkk PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 10:47:38.374: INFO: >>> kubeConfig: /root/.kube/config
I0213 10:47:38.448475 8 log.go:172] (0xc000934630) (0xc000e003c0) Create stream
I0213 10:47:38.448559 8 log.go:172] (0xc000934630) (0xc000e003c0) Stream added, broadcasting: 1
I0213 10:47:38.457212 8 log.go:172] (0xc000934630) Reply frame received for 1
I0213 10:47:38.457256 8 log.go:172] (0xc000934630) (0xc00151e460) Create stream
I0213 10:47:38.457276 8 log.go:172] (0xc000934630) (0xc00151e460) Stream added, broadcasting: 3
I0213 10:47:38.459533 8 log.go:172] (0xc000934630) Reply frame received for 3
I0213 10:47:38.459579 8 log.go:172] (0xc000934630) (0xc000e00460) Create stream
I0213 10:47:38.459602 8 log.go:172] (0xc000934630) (0xc000e00460) Stream added, broadcasting: 5
I0213 10:47:38.461427 8 log.go:172] (0xc000934630) Reply frame received for 5
I0213 10:47:38.685256 8 log.go:172] (0xc000934630) Data frame received for 3
I0213 10:47:38.685330 8 log.go:172] (0xc00151e460) (3) Data frame handling
I0213 10:47:38.685353 8 log.go:172] (0xc00151e460) (3) Data frame sent
I0213 10:47:38.816016 8 log.go:172] (0xc000934630) Data frame received for 1
I0213 10:47:38.816068 8 log.go:172] (0xc000934630) (0xc00151e460) Stream removed, broadcasting: 3
I0213 10:47:38.816111 8 log.go:172] (0xc000e003c0) (1) Data frame handling
I0213 10:47:38.816123 8 log.go:172] (0xc000e003c0) (1) Data frame sent
I0213 10:47:38.816131 8 log.go:172] (0xc000934630) (0xc000e003c0) Stream removed, broadcasting: 1
I0213 10:47:38.816254 8 log.go:172] (0xc000934630) (0xc000e00460) Stream removed, broadcasting: 5
I0213 10:47:38.816303 8 log.go:172] (0xc000934630) (0xc000e003c0) Stream removed, broadcasting: 1
I0213 10:47:38.816316 8 log.go:172] (0xc000934630) (0xc00151e460) Stream removed, broadcasting: 3
I0213 10:47:38.816335 8 log.go:172] (0xc000934630) (0xc000e00460) Stream removed, broadcasting: 5
Feb 13 10:47:38.816: INFO: Exec stderr: ""
I0213 10:47:38.816434 8 log.go:172] (0xc000934630) Go away received
Feb 13 10:47:38.816: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zvvkk PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 10:47:38.816: INFO: >>> kubeConfig: /root/.kube/config
I0213 10:47:38.933963 8 log.go:172] (0xc0016d42c0) (0xc001598320) Create stream
I0213 10:47:38.934030 8 log.go:172] (0xc0016d42c0) (0xc001598320) Stream added, broadcasting: 1
I0213 10:47:38.938234 8 log.go:172] (0xc0016d42c0) Reply frame received for 1
I0213 10:47:38.938263 8 log.go:172] (0xc0016d42c0) (0xc001cbc6e0) Create stream
I0213 10:47:38.938272 8 log.go:172] (0xc0016d42c0) (0xc001cbc6e0) Stream added, broadcasting: 3
I0213 10:47:38.939819 8 log.go:172] (0xc0016d42c0) Reply frame received for 3
I0213 10:47:38.939850 8 log.go:172] (0xc0016d42c0) (0xc001cbc780) Create stream
I0213 10:47:38.939859 8 log.go:172] (0xc0016d42c0) (0xc001cbc780) Stream added, broadcasting: 5
I0213 10:47:38.941193 8 log.go:172] (0xc0016d42c0) Reply frame received for 5
I0213 10:47:39.046180 8 log.go:172] (0xc0016d42c0) Data frame received for 3
I0213 10:47:39.046253 8 log.go:172] (0xc001cbc6e0) (3) Data frame handling
I0213 10:47:39.046270 8 log.go:172] (0xc001cbc6e0) (3) Data frame sent
I0213 10:47:39.184819 8 log.go:172] (0xc0016d42c0) (0xc001cbc6e0) Stream removed, broadcasting: 3
I0213 10:47:39.185080 8 log.go:172] (0xc0016d42c0) Data frame received for 1
I0213 10:47:39.185156 8 log.go:172] (0xc0016d42c0) (0xc001cbc780) Stream removed, broadcasting: 5
I0213 10:47:39.185394 8 log.go:172] (0xc001598320) (1) Data frame handling
I0213 10:47:39.185425 8 log.go:172] (0xc001598320) (1) Data frame sent
I0213 10:47:39.185442 8 log.go:172] (0xc0016d42c0) (0xc001598320) Stream removed, broadcasting: 1
I0213 10:47:39.185462 8 log.go:172] (0xc0016d42c0) Go away received
I0213 10:47:39.185717 8 log.go:172] (0xc0016d42c0) (0xc001598320) Stream removed, broadcasting: 1
I0213 10:47:39.185734 8 log.go:172] (0xc0016d42c0) (0xc001cbc6e0) Stream removed, broadcasting: 3
I0213 10:47:39.185745 8 log.go:172] (0xc0016d42c0) (0xc001cbc780) Stream removed, broadcasting: 5
Feb 13 10:47:39.185: INFO: Exec stderr: ""
Feb 13 10:47:39.185: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-zvvkk PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 10:47:39.185: INFO: >>> kubeConfig: /root/.kube/config
I0213 10:47:39.278688 8 log.go:172] (0xc000934bb0) (0xc000e006e0) Create stream
I0213 10:47:39.278732 8 log.go:172] (0xc000934bb0) (0xc000e006e0) Stream added, broadcasting: 1
I0213 10:47:39.281857 8 log.go:172] (0xc000934bb0) Reply frame received for 1
I0213 10:47:39.281896 8 log.go:172] (0xc000934bb0) (0xc001cbc820) Create stream
I0213 10:47:39.281910 8 log.go:172] (0xc000934bb0) (0xc001cbc820) Stream added, broadcasting: 3
I0213 10:47:39.284878 8 log.go:172] (0xc000934bb0) Reply frame received for 3
I0213 10:47:39.284893 8 log.go:172] (0xc000934bb0) (0xc00151e5a0) Create stream
I0213 10:47:39.284901 8 log.go:172] (0xc000934bb0) (0xc00151e5a0) Stream added, broadcasting: 5
I0213 10:47:39.285856 8 log.go:172] (0xc000934bb0) Reply frame received for 5
I0213 10:47:39.411208 8 log.go:172] (0xc000934bb0) Data frame received for 3
I0213 10:47:39.411268 8 log.go:172] (0xc001cbc820) (3) Data frame handling
I0213 10:47:39.411294 8 log.go:172] (0xc001cbc820) (3) Data frame sent
I0213 10:47:39.544054 8 log.go:172] (0xc000934bb0) Data frame received for 1
I0213 10:47:39.544149 8 log.go:172] (0xc000934bb0) (0xc001cbc820) Stream removed, broadcasting: 3
I0213 10:47:39.544196 8 log.go:172] (0xc000e006e0) (1) Data frame handling
I0213 10:47:39.544217 8 log.go:172] (0xc000e006e0) (1) Data frame sent
I0213 10:47:39.544229 8 log.go:172] (0xc000934bb0) (0xc000e006e0) Stream removed, broadcasting: 1
I0213 10:47:39.544242 8 log.go:172] (0xc000934bb0) (0xc00151e5a0) Stream removed, broadcasting: 5
I0213 10:47:39.544335 8 log.go:172] (0xc000934bb0) Go away received
I0213 10:47:39.544578 8 log.go:172] (0xc000934bb0) (0xc000e006e0) Stream removed, broadcasting: 1
I0213 10:47:39.544602 8 log.go:172] (0xc000934bb0) (0xc001cbc820) Stream removed, broadcasting: 3
I0213 10:47:39.544619 8 log.go:172] (0xc000934bb0) (0xc00151e5a0) Stream removed, broadcasting: 5
Feb 13 10:47:39.544: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 10:47:39.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-zvvkk" for this suite.
Feb 13 10:48:37.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 10:48:37.721: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-zvvkk, resource: bindings, ignored listing per whitelist
Feb 13 10:48:37.902: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-zvvkk deletion completed in 58.34064094s

• [SLOW TEST:84.585 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 10:48:37.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-gv5pz
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 13 10:48:38.060: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 13 10:49:14.384: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-gv5pz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 10:49:14.385: INFO: >>> kubeConfig: /root/.kube/config
I0213 10:49:14.501854 8 log.go:172] (0xc0009c6c60) (0xc000e00280) Create stream
I0213 10:49:14.502040 8 log.go:172] (0xc0009c6c60) (0xc000e00280) Stream added, broadcasting: 1
I0213 10:49:14.512890 8 log.go:172] (0xc0009c6c60) Reply frame received for 1
I0213 10:49:14.512947 8 log.go:172] (0xc0009c6c60) (0xc00027e320) Create stream
I0213 10:49:14.512971 8 log.go:172] (0xc0009c6c60) (0xc00027e320) Stream added, broadcasting: 3
I0213 10:49:14.518377 8 log.go:172] (0xc0009c6c60) Reply frame received for 3
I0213 10:49:14.518497 8 log.go:172] (0xc0009c6c60) (0xc000e00320) Create stream
I0213 10:49:14.518533 8 log.go:172] (0xc0009c6c60) (0xc000e00320) Stream added, broadcasting: 5
I0213 10:49:14.520687 8 log.go:172] (0xc0009c6c60) Reply frame received for 5
I0213 10:49:14.789150 8 log.go:172] (0xc0009c6c60) Data frame received for 3
I0213 10:49:14.789267 8 log.go:172] (0xc00027e320) (3) Data frame handling
I0213 10:49:14.789397 8 log.go:172] (0xc00027e320) (3) Data frame sent
I0213 10:49:14.959032 8 log.go:172] (0xc0009c6c60) (0xc00027e320) Stream removed, broadcasting: 3
I0213 10:49:14.959258 8 log.go:172] (0xc0009c6c60) (0xc000e00320) Stream removed, broadcasting: 5
I0213 10:49:14.959332 8 log.go:172] (0xc0009c6c60) Data frame received for 1
I0213 10:49:14.959369 8 log.go:172] (0xc000e00280) (1) Data frame handling
I0213 10:49:14.959388 8 log.go:172] (0xc000e00280) (1) Data frame sent
I0213 10:49:14.959399 8 log.go:172] (0xc0009c6c60) (0xc000e00280) Stream removed, broadcasting: 1
I0213 10:49:14.959436 8 log.go:172] (0xc0009c6c60) Go away received
I0213 10:49:14.959726 8 log.go:172] (0xc0009c6c60) (0xc000e00280) Stream removed, broadcasting: 1
I0213 10:49:14.959735 8 log.go:172] (0xc0009c6c60) (0xc00027e320) Stream removed, broadcasting: 3
I0213 10:49:14.959747 8 log.go:172] (0xc0009c6c60) (0xc000e00320) Stream removed, broadcasting: 5
Feb 13 10:49:14.959: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 10:49:14.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-gv5pz" for this suite.
Feb 13 10:49:39.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 10:49:39.131: INFO: namespace: e2e-tests-pod-network-test-gv5pz, resource: bindings, ignored listing per whitelist
Feb 13 10:49:39.208: INFO: namespace e2e-tests-pod-network-test-gv5pz deletion completed in 24.229729991s

• [SLOW TEST:61.306 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 10:49:39.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 13 10:49:39.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 10:49:50.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-c66sf" for this suite.
Feb 13 10:50:32.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 10:50:32.282: INFO: namespace: e2e-tests-pods-c66sf, resource: bindings, ignored listing per whitelist
Feb 13 10:50:32.376: INFO: namespace e2e-tests-pods-c66sf deletion completed in 42.254807896s

• [SLOW TEST:53.168 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 10:50:32.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 13 10:50:32.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-zrsgp' Feb 13 10:50:34.905: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 13 10:50:34.906: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Feb 13 10:50:37.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-zrsgp' Feb 13 10:50:37.487: INFO: stderr: "" Feb 13 10:50:37.487: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 10:50:37.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zrsgp" for this suite. 
Feb 13 10:50:43.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 10:50:44.021: INFO: namespace: e2e-tests-kubectl-zrsgp, resource: bindings, ignored listing per whitelist Feb 13 10:50:44.170: INFO: namespace e2e-tests-kubectl-zrsgp deletion completed in 6.652362818s • [SLOW TEST:11.794 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 10:50:44.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 STEP: Collecting events from namespace "e2e-tests-container-probe-m4ntw". 
STEP: Found 5 events. Feb 13 10:51:44.591: INFO: At 2020-02-13 10:50:44 +0000 UTC - event for test-webserver-af56a31a-4e4e-11ea-aba9-0242ac110007: {default-scheduler } Scheduled: Successfully assigned e2e-tests-container-probe-m4ntw/test-webserver-af56a31a-4e4e-11ea-aba9-0242ac110007 to hunter-server-hu5at5svl7ps Feb 13 10:51:44.591: INFO: At 2020-02-13 10:50:49 +0000 UTC - event for test-webserver-af56a31a-4e4e-11ea-aba9-0242ac110007: {kubelet hunter-server-hu5at5svl7ps} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0" already present on machine Feb 13 10:51:44.592: INFO: At 2020-02-13 10:50:52 +0000 UTC - event for test-webserver-af56a31a-4e4e-11ea-aba9-0242ac110007: {kubelet hunter-server-hu5at5svl7ps} Created: Created container Feb 13 10:51:44.592: INFO: At 2020-02-13 10:50:52 +0000 UTC - event for test-webserver-af56a31a-4e4e-11ea-aba9-0242ac110007: {kubelet hunter-server-hu5at5svl7ps} Started: Started container Feb 13 10:51:44.592: INFO: At 2020-02-13 10:51:00 +0000 UTC - event for test-webserver-af56a31a-4e4e-11ea-aba9-0242ac110007: {kubelet hunter-server-hu5at5svl7ps} Unhealthy: Readiness probe failed: Get http://10.32.0.4:81/: dial tcp 10.32.0.4:81: connect: connection refused Feb 13 10:51:44.705: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 10:51:44.706: INFO: test-webserver-af56a31a-4e4e-11ea-aba9-0242ac110007 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 10:50:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 10:50:44 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 10:50:44 +0000 UTC ContainersNotReady containers with unready status: [test-webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 10:50:44 +0000 UTC }] Feb 13 10:51:44.706: INFO: coredns-54ff9cd656-79kxx hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 
UTC 2019-08-04 08:33:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC }] Feb 13 10:51:44.706: INFO: coredns-54ff9cd656-bmkk4 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:46 +0000 UTC }] Feb 13 10:51:44.706: INFO: etcd-hunter-server-hu5at5svl7ps hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC }] Feb 13 10:51:44.706: INFO: kube-apiserver-hunter-server-hu5at5svl7ps hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC }] Feb 13 10:51:44.706: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-02 12:18:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC }] Feb 13 10:51:44.706: INFO: 
kube-proxy-bqnnz hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:22 +0000 UTC }] Feb 13 10:51:44.706: INFO: kube-scheduler-hunter-server-hu5at5svl7ps hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 05:36:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 05:36:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:32:42 +0000 UTC }] Feb 13 10:51:44.706: INFO: weave-net-tqwf2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 11:11:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-08-04 08:33:23 +0000 UTC }] Feb 13 10:51:44.706: INFO: Feb 13 10:51:44.731: INFO: Logging node info for node hunter-server-hu5at5svl7ps Feb 13 10:51:44.743: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-server-hu5at5svl7ps,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-server-hu5at5svl7ps,UID:79f3887d-b692-11e9-a994-fa163e34d433,ResourceVersion:21520167,Generation:0,CreationTimestamp:2019-08-04 08:33:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-server-hu5at5svl7ps,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 
0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:33:41 +0000 UTC 2019-08-04 08:33:41 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-02-13 10:51:44 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-02-13 10:51:44 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-02-13 10:51:44 +0000 UTC 2019-08-04 08:32:55 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-02-13 10:51:44 +0000 UTC 2019-08-04 08:33:44 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.1.240} {Hostname hunter-server-hu5at5svl7ps}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09742db8afaa4010be44cec974ef8dd2,SystemUUID:09742DB8-AFAA-4010-BE44-CEC974EF8DD2,BootID:e5092afb-2b29-4458-9662-9eee6c0a1f90,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.13.8,KubeProxyVersion:v1.13.8,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 k8s.gcr.io/etcd:3.2.24] 219655340} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[k8s.gcr.io/kube-apiserver@sha256:782fb3e5e34a3025e5c2fc92d5a73fc5eb5223fbd1760a551f2d02e1b484c899 k8s.gcr.io/kube-apiserver:v1.13.8] 181093118} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[k8s.gcr.io/kube-controller-manager@sha256:46889a90fff5324ad813c1024d0b7713a5529117570e3611657a0acfb58c8f43 k8s.gcr.io/kube-controller-manager:v1.13.8] 146353566} {[nginx@sha256:70821e443be75ea38bdf52a974fd2271babd5875b2b1964f05025981c75a6717] 126698067} {[nginx@sha256:ad5552c786f128e389a0263104ae39f3d3c7895579d45ae716f528185b36bc6f nginx:latest] 126698063} {[nginx@sha256:662b1a542362596b094b0b3fa30a8528445b75aed9f2d009f72401a0f8870c1f nginx@sha256:9916837e6b165e967e2beb5a586b1c980084d08eb3b3d7f79178a0c79426d880] 126346569} {[nginx@sha256:8aa7f6a9585d908a63e5e418dc5d14ae7467d2e36e1ab4f0d8f9d059a3d071ce] 126324348} {[nginx@sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2] 
126323778} {[nginx@sha256:50cf965a6e08ec5784009d0fccb380fc479826b6e0e65684d9879170a9df8566 nginx@sha256:73113849b52b099e447eabb83a2722635562edc798f5b86bdf853faa0a49ec70] 126323486} {[nginx@sha256:922c815aa4df050d4df476e92daed4231f466acc8ee90e0e774951b0fd7195a4] 126215561} {[nginx@sha256:77ebc94e0cec30b20f9056bac1066b09fbdc049401b71850922c63fc0cc1762e] 125993293} {[nginx@sha256:9688d0dae8812dd2437947b756393eb0779487e361aa2ffbc3a529dca61f102c] 125976833} {[nginx@sha256:aeded0f2a861747f43a01cf1018cf9efe2bdd02afd57d2b11fcc7fcadc16ccd1] 125972845} {[nginx@sha256:1a8935aae56694cee3090d39df51b4e7fcbfe6877df24a4c5c0782dfeccc97e1 nginx@sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9 nginx@sha256:a8517b1d89209c88eeb48709bc06d706c261062813720a352a8e4f8d96635d9d] 125958368} {[nginx@sha256:5411d8897c3da841a1f45f895b43ad4526eb62d3393c3287124a56be49962d41] 125850912} {[nginx@sha256:eb3320e2f9ca409b7c0aa71aea3cf7ce7d018f03a372564dbdb023646958770b] 125850346} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:c27502f9ab958f59f95bda6a4ffd266e3ca42a75aae641db4aac7e93dd383b6e k8s.gcr.io/kube-proxy:v1.13.8] 80245404} {[k8s.gcr.io/kube-scheduler@sha256:fdcc2d056ba5937f66301b9071b2c322fad53254e6ddf277592d99f267e5745f k8s.gcr.io/kube-scheduler:v1.13.8] 79601406} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51 k8s.gcr.io/coredns:1.2.6] 40017418} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} 
{[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 
gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} Feb 13 10:51:44.744: INFO: Logging kubelet events for node hunter-server-hu5at5svl7ps Feb 13 10:51:44.763: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps Feb 13 10:51:44.875: INFO: etcd-hunter-server-hu5at5svl7ps started at (0+0 container statuses recorded) Feb 13 10:51:44.875: INFO: weave-net-tqwf2 started at 2019-08-04 08:33:23 +0000 UTC (0+2 container statuses recorded) Feb 13 10:51:44.875: INFO: Container weave ready: true, restart count 0 Feb 13 10:51:44.875: INFO: Container weave-npc ready: true, restart count 0 Feb 13 10:51:44.875: INFO: test-webserver-af56a31a-4e4e-11ea-aba9-0242ac110007 started at 2020-02-13 10:50:44 +0000 UTC (0+1 container statuses recorded) Feb 13 10:51:44.875: INFO: Container test-webserver ready: false, restart count 1 Feb 13 10:51:44.875: INFO: coredns-54ff9cd656-bmkk4 started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded) Feb 13 10:51:44.875: INFO: Container coredns ready: true, restart count 0 Feb 13 10:51:44.875: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps started at (0+0 container statuses recorded) Feb 13 10:51:44.875: INFO: kube-apiserver-hunter-server-hu5at5svl7ps started at (0+0 container statuses recorded) Feb 13 10:51:44.875: INFO: 
kube-scheduler-hunter-server-hu5at5svl7ps started at (0+0 container statuses recorded) Feb 13 10:51:44.875: INFO: coredns-54ff9cd656-79kxx started at 2019-08-04 08:33:46 +0000 UTC (0+1 container statuses recorded) Feb 13 10:51:44.875: INFO: Container coredns ready: true, restart count 0 Feb 13 10:51:44.875: INFO: kube-proxy-bqnnz started at 2019-08-04 08:33:23 +0000 UTC (0+1 container statuses recorded) Feb 13 10:51:44.875: INFO: Container kube-proxy ready: true, restart count 0 W0213 10:51:44.900157 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 13 10:51:45.027: INFO: Latency metrics for node hunter-server-hu5at5svl7ps Feb 13 10:51:45.027: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:33.773167s} Feb 13 10:51:45.027: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:28.985065s} Feb 13 10:51:45.027: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:28.210522s} Feb 13 10:51:45.027: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:12.023762s} Feb 13 10:51:45.027: INFO: {Operation:create_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:10.299678s} Feb 13 10:51:45.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-m4ntw" for this suite. 
Feb 13 10:52:09.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 10:52:09.189: INFO: namespace: e2e-tests-container-probe-m4ntw, resource: bindings, ignored listing per whitelist Feb 13 10:52:09.263: INFO: namespace e2e-tests-container-probe-m4ntw deletion completed in 24.220422036s • Failure [85.092 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 pod should have a restart count of 0 but got 1 Expected : false to be true /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:107 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 10:52:09.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 13 10:52:09.475: INFO: Waiting up to 5m0s for pod "pod-e204987b-4e4e-11ea-aba9-0242ac110007" in namespace "e2e-tests-emptydir-vjcmm" to be "success or failure" Feb 13 10:52:09.492: INFO: Pod "pod-e204987b-4e4e-11ea-aba9-0242ac110007": Phase="Pending", 
Reason="", readiness=false. Elapsed: 17.356989ms Feb 13 10:52:11.511: INFO: Pod "pod-e204987b-4e4e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036253712s Feb 13 10:52:13.537: INFO: Pod "pod-e204987b-4e4e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062528573s Feb 13 10:52:15.568: INFO: Pod "pod-e204987b-4e4e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093194498s Feb 13 10:52:17.758: INFO: Pod "pod-e204987b-4e4e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.283603763s Feb 13 10:52:19.787: INFO: Pod "pod-e204987b-4e4e-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.312602547s STEP: Saw pod success Feb 13 10:52:19.788: INFO: Pod "pod-e204987b-4e4e-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 10:52:19.804: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e204987b-4e4e-11ea-aba9-0242ac110007 container test-container: STEP: delete the pod Feb 13 10:52:20.543: INFO: Waiting for pod pod-e204987b-4e4e-11ea-aba9-0242ac110007 to disappear Feb 13 10:52:20.572: INFO: Pod pod-e204987b-4e4e-11ea-aba9-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 10:52:20.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-vjcmm" for this suite. 
Feb 13 10:52:26.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 10:52:26.908: INFO: namespace: e2e-tests-emptydir-vjcmm, resource: bindings, ignored listing per whitelist Feb 13 10:52:27.028: INFO: namespace e2e-tests-emptydir-vjcmm deletion completed in 6.428326279s • [SLOW TEST:17.765 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 10:52:27.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 10:52:27.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-zt97q" for this suite. 
Feb 13 10:52:51.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 10:52:51.568: INFO: namespace: e2e-tests-pods-zt97q, resource: bindings, ignored listing per whitelist Feb 13 10:52:51.611: INFO: namespace e2e-tests-pods-zt97q deletion completed in 24.310982569s • [SLOW TEST:24.582 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 10:52:51.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-fb4c84ae-4e4e-11ea-aba9-0242ac110007 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 10:53:06.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-configmap-lv9vm" for this suite. Feb 13 10:53:30.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 10:53:30.536: INFO: namespace: e2e-tests-configmap-lv9vm, resource: bindings, ignored listing per whitelist Feb 13 10:53:30.661: INFO: namespace e2e-tests-configmap-lv9vm deletion completed in 24.277915027s • [SLOW TEST:39.050 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 10:53:30.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 13 10:53:30.895: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Feb 13 10:53:36.635: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 13 10:53:42.917: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 13 10:53:43.014: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-6nmfx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6nmfx/deployments/test-cleanup-deployment,UID:19bad5a9-4e4f-11ea-a994-fa163e34d433,ResourceVersion:21520411,Generation:1,CreationTimestamp:2020-02-13 10:53:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Feb 13 10:53:43.017: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 10:53:43.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-6nmfx" for this suite. 
Feb 13 10:53:55.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 10:53:56.035: INFO: namespace: e2e-tests-deployment-6nmfx, resource: bindings, ignored listing per whitelist Feb 13 10:53:56.132: INFO: namespace e2e-tests-deployment-6nmfx deletion completed in 13.057049175s • [SLOW TEST:25.471 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 10:53:56.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
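The lifecycle-hook case being set up here creates a handler pod first, then a pod whose container fires a `postStart` httpGet hook at it. A rough sketch of the pod under test follows; the pod name comes from the log, but the image, hook path, host IP, and port are all assumptions (the e2e framework wires in the real handler pod's address at runtime):

```yaml
# Hypothetical sketch of the postStart-hook pod. host/port/path are
# placeholders; the e2e test substitutes the handler pod's actual IP.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.1        # assumed image
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart    # hypothetical path
          host: 10.32.0.4              # handler pod IP (assumption)
          port: 8080                   # assumed handler port
```

The kubelet runs the hook immediately after the container starts, and the test then polls the handler pod to confirm the request arrived.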
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 13 10:54:16.822: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 13 10:54:16.840: INFO: Pod pod-with-poststart-http-hook still exists Feb 13 10:54:18.840: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 13 10:54:19.522: INFO: Pod pod-with-poststart-http-hook still exists Feb 13 10:54:20.840: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 13 10:54:21.544: INFO: Pod pod-with-poststart-http-hook still exists Feb 13 10:54:22.840: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 13 10:54:22.929: INFO: Pod pod-with-poststart-http-hook still exists Feb 13 10:54:24.840: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 13 10:54:24.851: INFO: Pod pod-with-poststart-http-hook still exists Feb 13 10:54:26.840: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 13 10:54:26.857: INFO: Pod pod-with-poststart-http-hook still exists Feb 13 10:54:28.840: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 13 10:54:28.871: INFO: Pod pod-with-poststart-http-hook still exists Feb 13 10:54:30.840: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 13 10:54:30.874: INFO: Pod pod-with-poststart-http-hook still exists Feb 13 10:54:32.840: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 13 10:54:32.859: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 10:54:32.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-container-lifecycle-hook-vptrg" for this suite. Feb 13 10:55:04.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 10:55:05.044: INFO: namespace: e2e-tests-container-lifecycle-hook-vptrg, resource: bindings, ignored listing per whitelist Feb 13 10:55:05.085: INFO: namespace e2e-tests-container-lifecycle-hook-vptrg deletion completed in 32.215484331s • [SLOW TEST:68.952 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 10:55:05.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Feb 13 10:55:05.389: INFO: Waiting up to 5m0s for pod "client-containers-4adefda4-4e4f-11ea-aba9-0242ac110007" in namespace "e2e-tests-containers-jlgpz" to be "success or 
failure" Feb 13 10:55:05.538: INFO: Pod "client-containers-4adefda4-4e4f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 148.538223ms Feb 13 10:55:07.562: INFO: Pod "client-containers-4adefda4-4e4f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172711411s Feb 13 10:55:09.574: INFO: Pod "client-containers-4adefda4-4e4f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185123582s Feb 13 10:55:12.026: INFO: Pod "client-containers-4adefda4-4e4f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.636541916s Feb 13 10:55:14.056: INFO: Pod "client-containers-4adefda4-4e4f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.667130666s Feb 13 10:55:16.068: INFO: Pod "client-containers-4adefda4-4e4f-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.678563158s STEP: Saw pod success Feb 13 10:55:16.068: INFO: Pod "client-containers-4adefda4-4e4f-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 10:55:16.072: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-4adefda4-4e4f-11ea-aba9-0242ac110007 container test-container: STEP: delete the pod Feb 13 10:55:16.683: INFO: Waiting for pod client-containers-4adefda4-4e4f-11ea-aba9-0242ac110007 to disappear Feb 13 10:55:16.758: INFO: Pod client-containers-4adefda4-4e4f-11ea-aba9-0242ac110007 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 10:55:16.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-jlgpz" for this suite. 
Feb 13 10:55:22.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 10:55:22.949: INFO: namespace: e2e-tests-containers-jlgpz, resource: bindings, ignored listing per whitelist Feb 13 10:55:22.999: INFO: namespace e2e-tests-containers-jlgpz deletion completed in 6.220092413s • [SLOW TEST:17.914 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 10:55:23.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-vts66 Feb 13 10:55:33.314: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-vts66 STEP: checking the pod's current state and verifying that restartCount is present Feb 13 10:55:33.330: INFO: Initial restart 
count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 10:59:35.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-vts66" for this suite. Feb 13 10:59:41.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 10:59:41.440: INFO: namespace: e2e-tests-container-probe-vts66, resource: bindings, ignored listing per whitelist Feb 13 10:59:41.523: INFO: namespace e2e-tests-container-probe-vts66 deletion completed in 6.250226037s • [SLOW TEST:258.523 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 10:59:41.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: 
closing the watch once it receives two notifications Feb 13 10:59:41.868: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-sgqks,SelfLink:/api/v1/namespaces/e2e-tests-watch-sgqks/configmaps/e2e-watch-test-watch-closed,UID:efa5d804-4e4f-11ea-a994-fa163e34d433,ResourceVersion:21520955,Generation:0,CreationTimestamp:2020-02-13 10:59:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 13 10:59:41.869: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-sgqks,SelfLink:/api/v1/namespaces/e2e-tests-watch-sgqks/configmaps/e2e-watch-test-watch-closed,UID:efa5d804-4e4f-11ea-a994-fa163e34d433,ResourceVersion:21520956,Generation:0,CreationTimestamp:2020-02-13 10:59:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Feb 13 10:59:41.889: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-sgqks,SelfLink:/api/v1/namespaces/e2e-tests-watch-sgqks/configmaps/e2e-watch-test-watch-closed,UID:efa5d804-4e4f-11ea-a994-fa163e34d433,ResourceVersion:21520957,Generation:0,CreationTimestamp:2020-02-13 10:59:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 13 10:59:41.889: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-sgqks,SelfLink:/api/v1/namespaces/e2e-tests-watch-sgqks/configmaps/e2e-watch-test-watch-closed,UID:efa5d804-4e4f-11ea-a994-fa163e34d433,ResourceVersion:21520958,Generation:0,CreationTimestamp:2020-02-13 10:59:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 10:59:41.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-sgqks" for this suite. 
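The event sequence above (ADDED, MODIFIED with `mutation: 1`, then, on a fresh watch, MODIFIED with `mutation: 2` and DELETED) exercises resuming a watch from a known point. The object involved is just a labeled ConfigMap, reconstructed from the dump:

```yaml
# The ConfigMap the watch test mutates; fields reconstructed from the
# log dump above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-watch-closed
  labels:
    watch-this-configmap: watch-closed-and-restarted
data:
  mutation: "1"   # bumped to "2" while the first watch is closed
```

A client resumes by passing the last observed `metadata.resourceVersion` (21520956 in this run) as the `resourceVersion` parameter of the new watch request, so events that occurred while the first watch was closed are replayed rather than missed.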
Feb 13 10:59:47.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 10:59:48.104: INFO: namespace: e2e-tests-watch-sgqks, resource: bindings, ignored listing per whitelist Feb 13 10:59:48.134: INFO: namespace e2e-tests-watch-sgqks deletion completed in 6.239373169s • [SLOW TEST:6.611 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 10:59:48.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f38c3856-4e4f-11ea-aba9-0242ac110007 STEP: Creating a pod to test consume secrets Feb 13 10:59:48.392: INFO: Waiting up to 5m0s for pod "pod-secrets-f38d6ab4-4e4f-11ea-aba9-0242ac110007" in namespace "e2e-tests-secrets-s4zlq" to be "success or failure" Feb 13 10:59:48.397: INFO: Pod "pod-secrets-f38d6ab4-4e4f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.906025ms Feb 13 10:59:50.408: INFO: Pod "pod-secrets-f38d6ab4-4e4f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016328944s Feb 13 10:59:52.420: INFO: Pod "pod-secrets-f38d6ab4-4e4f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027987331s Feb 13 10:59:54.902: INFO: Pod "pod-secrets-f38d6ab4-4e4f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.510222911s Feb 13 10:59:56.928: INFO: Pod "pod-secrets-f38d6ab4-4e4f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.535711388s Feb 13 10:59:58.938: INFO: Pod "pod-secrets-f38d6ab4-4e4f-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.546204809s STEP: Saw pod success Feb 13 10:59:58.938: INFO: Pod "pod-secrets-f38d6ab4-4e4f-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 10:59:58.943: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-f38d6ab4-4e4f-11ea-aba9-0242ac110007 container secret-volume-test: STEP: delete the pod Feb 13 10:59:59.676: INFO: Waiting for pod pod-secrets-f38d6ab4-4e4f-11ea-aba9-0242ac110007 to disappear Feb 13 10:59:59.696: INFO: Pod pod-secrets-f38d6ab4-4e4f-11ea-aba9-0242ac110007 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 10:59:59.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-s4zlq" for this suite. 
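The secret-volume case just torn down mounts a secret as a non-root user with an `fsGroup` and a non-default file mode. A sketch follows; the secret name is taken from the log, while the UID/GID, mode, key name, and image are assumptions:

```yaml
# Hypothetical sketch of the non-root defaultMode/fsGroup secret pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  securityContext:
    runAsUser: 1000    # assumed non-root UID
    fsGroup: 1001      # assumed supplemental group applied to the volume
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox     # assumed image
    command: ["cat", "/etc/secret-volume/data-1"]   # hypothetical key
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-f38c3856-4e4f-11ea-aba9-0242ac110007
      defaultMode: 0440   # assumed non-default mode
```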
Feb 13 11:00:05.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:00:05.898: INFO: namespace: e2e-tests-secrets-s4zlq, resource: bindings, ignored listing per whitelist Feb 13 11:00:05.914: INFO: namespace e2e-tests-secrets-s4zlq deletion completed in 6.209567722s • [SLOW TEST:17.780 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:00:05.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 13 11:00:06.103: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fe1d1dc5-4e4f-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-p8t2f" to be "success or failure" Feb 13 11:00:06.117: INFO: Pod 
"downwardapi-volume-fe1d1dc5-4e4f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 13.3515ms Feb 13 11:00:08.220: INFO: Pod "downwardapi-volume-fe1d1dc5-4e4f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116987791s Feb 13 11:00:10.239: INFO: Pod "downwardapi-volume-fe1d1dc5-4e4f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135745599s Feb 13 11:00:12.255: INFO: Pod "downwardapi-volume-fe1d1dc5-4e4f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151645928s Feb 13 11:00:14.673: INFO: Pod "downwardapi-volume-fe1d1dc5-4e4f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.570157542s Feb 13 11:00:16.695: INFO: Pod "downwardapi-volume-fe1d1dc5-4e4f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.591780497s Feb 13 11:00:18.782: INFO: Pod "downwardapi-volume-fe1d1dc5-4e4f-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.678924157s STEP: Saw pod success Feb 13 11:00:18.782: INFO: Pod "downwardapi-volume-fe1d1dc5-4e4f-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 11:00:18.790: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fe1d1dc5-4e4f-11ea-aba9-0242ac110007 container client-container: STEP: delete the pod Feb 13 11:00:18.881: INFO: Waiting for pod downwardapi-volume-fe1d1dc5-4e4f-11ea-aba9-0242ac110007 to disappear Feb 13 11:00:18.952: INFO: Pod downwardapi-volume-fe1d1dc5-4e4f-11ea-aba9-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:00:18.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-p8t2f" for this suite. 
Feb 13 11:00:25.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:00:25.029: INFO: namespace: e2e-tests-projected-p8t2f, resource: bindings, ignored listing per whitelist Feb 13 11:00:25.116: INFO: namespace e2e-tests-projected-p8t2f deletion completed in 6.151743908s • [SLOW TEST:19.202 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:00:25.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 13 11:00:25.528: INFO: Waiting up to 5m0s for pod "pod-099acbfc-4e50-11ea-aba9-0242ac110007" in namespace "e2e-tests-emptydir-jqmvb" to be "success or failure" Feb 13 11:00:25.562: INFO: Pod "pod-099acbfc-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 34.021516ms Feb 13 11:00:27.598: INFO: Pod "pod-099acbfc-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.070145055s Feb 13 11:00:29.630: INFO: Pod "pod-099acbfc-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101694102s Feb 13 11:00:31.705: INFO: Pod "pod-099acbfc-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.176418622s Feb 13 11:00:33.727: INFO: Pod "pod-099acbfc-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.198722845s Feb 13 11:00:35.740: INFO: Pod "pod-099acbfc-4e50-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.212067079s STEP: Saw pod success Feb 13 11:00:35.740: INFO: Pod "pod-099acbfc-4e50-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 11:00:35.744: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-099acbfc-4e50-11ea-aba9-0242ac110007 container test-container: STEP: delete the pod Feb 13 11:00:35.912: INFO: Waiting for pod pod-099acbfc-4e50-11ea-aba9-0242ac110007 to disappear Feb 13 11:00:35.927: INFO: Pod pod-099acbfc-4e50-11ea-aba9-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:00:35.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-jqmvb" for this suite. 
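The `(root,0644,tmpfs)` emptyDir case above means: write as root, expect mode 0644, on a memory-backed volume. `medium: Memory` is what selects tmpfs; the rest of this sketch (image, command, paths) is assumed:

```yaml
# Sketch of the (root,0644,tmpfs) emptyDir pod; medium: Memory backs the
# volume with tmpfs instead of node disk.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox   # assumed image
    command: ["sh", "-c", "umask 022 && echo hello > /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed
```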
Feb 13 11:00:44.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:00:44.114: INFO: namespace: e2e-tests-emptydir-jqmvb, resource: bindings, ignored listing per whitelist Feb 13 11:00:44.276: INFO: namespace e2e-tests-emptydir-jqmvb deletion completed in 8.337401211s • [SLOW TEST:19.159 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:00:44.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Feb 13 11:00:44.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pl8zl' Feb 13 11:00:47.430: INFO: stderr: "" Feb 13 11:00:47.430: INFO: stdout: "pod/pause created\n" Feb 13 11:00:47.431: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 13 11:00:47.431: INFO: Waiting up to 5m0s for pod "pause" in namespace 
"e2e-tests-kubectl-pl8zl" to be "running and ready" Feb 13 11:00:47.716: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 284.794015ms Feb 13 11:00:50.105: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.674194038s Feb 13 11:00:52.144: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.712958646s Feb 13 11:00:54.382: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.950944795s Feb 13 11:00:56.393: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.962184368s Feb 13 11:00:58.408: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.977419849s Feb 13 11:00:58.408: INFO: Pod "pause" satisfied condition "running and ready" Feb 13 11:00:58.408: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Feb 13 11:00:58.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-pl8zl' Feb 13 11:00:58.737: INFO: stderr: "" Feb 13 11:00:58.737: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 13 11:00:58.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-pl8zl' Feb 13 11:00:58.836: INFO: stderr: "" Feb 13 11:00:58.836: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 13 11:00:58.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-pl8zl' 
Feb 13 11:00:58.972: INFO: stderr: "" Feb 13 11:00:58.972: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 13 11:00:58.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-pl8zl' Feb 13 11:00:59.098: INFO: stderr: "" Feb 13 11:00:59.099: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 12s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Feb 13 11:00:59.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-pl8zl' Feb 13 11:00:59.298: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 13 11:00:59.299: INFO: stdout: "pod \"pause\" force deleted\n" Feb 13 11:00:59.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-pl8zl' Feb 13 11:00:59.462: INFO: stderr: "No resources found.\n" Feb 13 11:00:59.462: INFO: stdout: "" Feb 13 11:00:59.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-pl8zl -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 13 11:00:59.586: INFO: stderr: "" Feb 13 11:00:59.587: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:00:59.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-pl8zl" for this suite. 
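The kubectl-label round-trip above operates on a minimal pause pod, roughly like this sketch (the image tag is an assumption; the name and `name=pause` label match the cleanup selector in the log):

```yaml
# Minimal pause pod used by the kubectl-label test.
apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    name: pause
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # assumed tag
```

The test then adds the label with `kubectl label pods pause testing-label=testing-label-value`, verifies it via the label column from `kubectl get pod pause -L testing-label`, and removes it with the trailing-dash form `kubectl label pods pause testing-label-`, as shown in the log.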
Feb 13 11:01:05.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:01:05.791: INFO: namespace: e2e-tests-kubectl-pl8zl, resource: bindings, ignored listing per whitelist Feb 13 11:01:05.796: INFO: namespace e2e-tests-kubectl-pl8zl deletion completed in 6.186088218s • [SLOW TEST:21.520 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:01:05.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Feb 13 11:01:06.648: INFO: Waiting up to 5m0s for pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-592wz" in namespace "e2e-tests-svcaccounts-hlm6l" to be "success or failure" Feb 13 11:01:06.710: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-592wz": Phase="Pending", Reason="", readiness=false. 
Elapsed: 62.106933ms Feb 13 11:01:08.746: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-592wz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098122705s Feb 13 11:01:10.755: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-592wz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107277865s Feb 13 11:01:12.769: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-592wz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121066565s Feb 13 11:01:14.787: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-592wz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.13898414s Feb 13 11:01:16.798: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-592wz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.150127106s Feb 13 11:01:18.812: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-592wz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.164453968s Feb 13 11:01:20.828: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-592wz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.179978317s Feb 13 11:01:22.859: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-592wz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.211403183s STEP: Saw pod success Feb 13 11:01:22.859: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-592wz" satisfied condition "success or failure" Feb 13 11:01:22.866: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-592wz container token-test: STEP: delete the pod Feb 13 11:01:23.030: INFO: Waiting for pod pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-592wz to disappear Feb 13 11:01:23.123: INFO: Pod pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-592wz no longer exists STEP: Creating a pod to test consume service account root CA Feb 13 11:01:23.155: INFO: Waiting up to 5m0s for pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-zjv6n" in namespace "e2e-tests-svcaccounts-hlm6l" to be "success or failure" Feb 13 11:01:23.197: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-zjv6n": Phase="Pending", Reason="", readiness=false. Elapsed: 41.608716ms Feb 13 11:01:25.680: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-zjv6n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.524292927s Feb 13 11:01:27.715: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-zjv6n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.559653s Feb 13 11:01:29.747: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-zjv6n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.591463265s Feb 13 11:01:31.760: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-zjv6n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.60496951s Feb 13 11:01:34.004: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-zjv6n": Phase="Pending", Reason="", readiness=false. Elapsed: 10.848230344s Feb 13 11:01:36.655: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-zjv6n": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.499442793s Feb 13 11:01:38.674: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-zjv6n": Phase="Pending", Reason="", readiness=false. Elapsed: 15.518555705s Feb 13 11:01:40.701: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-zjv6n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.546002315s STEP: Saw pod success Feb 13 11:01:40.702: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-zjv6n" satisfied condition "success or failure" Feb 13 11:01:40.719: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-zjv6n container root-ca-test: STEP: delete the pod Feb 13 11:01:41.769: INFO: Waiting for pod pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-zjv6n to disappear Feb 13 11:01:41.870: INFO: Pod pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-zjv6n no longer exists STEP: Creating a pod to test consume service account namespace Feb 13 11:01:41.918: INFO: Waiting up to 5m0s for pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-4zjph" in namespace "e2e-tests-svcaccounts-hlm6l" to be "success or failure" Feb 13 11:01:41.947: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-4zjph": Phase="Pending", Reason="", readiness=false. Elapsed: 28.655557ms Feb 13 11:01:44.380: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-4zjph": Phase="Pending", Reason="", readiness=false. Elapsed: 2.462309595s Feb 13 11:01:46.460: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-4zjph": Phase="Pending", Reason="", readiness=false. Elapsed: 4.542081797s Feb 13 11:01:48.523: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-4zjph": Phase="Pending", Reason="", readiness=false. Elapsed: 6.604711424s Feb 13 11:01:50.565: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-4zjph": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.647251692s Feb 13 11:01:52.608: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-4zjph": Phase="Pending", Reason="", readiness=false. Elapsed: 10.690098733s Feb 13 11:01:54.622: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-4zjph": Phase="Pending", Reason="", readiness=false. Elapsed: 12.703458529s Feb 13 11:01:56.983: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-4zjph": Phase="Pending", Reason="", readiness=false. Elapsed: 15.064377666s Feb 13 11:01:58.999: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-4zjph": Phase="Pending", Reason="", readiness=false. Elapsed: 17.080954979s Feb 13 11:02:01.456: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-4zjph": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.537610296s STEP: Saw pod success Feb 13 11:02:01.456: INFO: Pod "pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-4zjph" satisfied condition "success or failure" Feb 13 11:02:01.468: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-4zjph container namespace-test: STEP: delete the pod Feb 13 11:02:01.981: INFO: Waiting for pod pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-4zjph to disappear Feb 13 11:02:02.063: INFO: Pod pod-service-account-222b58f2-4e50-11ea-aba9-0242ac110007-4zjph no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:02:02.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-hlm6l" for this suite. 
Feb 13 11:02:10.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:02:10.356: INFO: namespace: e2e-tests-svcaccounts-hlm6l, resource: bindings, ignored listing per whitelist Feb 13 11:02:10.416: INFO: namespace e2e-tests-svcaccounts-hlm6l deletion completed in 8.257111234s • [SLOW TEST:64.620 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:02:10.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Feb 13 11:02:10.671: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:02:10.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-pcrl7" for this suite. 
Feb 13 11:02:17.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:02:17.102: INFO: namespace: e2e-tests-kubectl-pcrl7, resource: bindings, ignored listing per whitelist Feb 13 11:02:17.179: INFO: namespace e2e-tests-kubectl-pcrl7 deletion completed in 6.339976725s • [SLOW TEST:6.763 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:02:17.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-4c521abf-4e50-11ea-aba9-0242ac110007 STEP: Creating a pod to test consume secrets Feb 13 11:02:17.385: INFO: Waiting up to 5m0s for pod "pod-secrets-4c544f77-4e50-11ea-aba9-0242ac110007" in namespace "e2e-tests-secrets-xfgc6" to be "success or failure" Feb 13 11:02:17.402: INFO: Pod "pod-secrets-4c544f77-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.003331ms Feb 13 11:02:19.419: INFO: Pod "pod-secrets-4c544f77-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033769837s Feb 13 11:02:21.446: INFO: Pod "pod-secrets-4c544f77-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06118219s Feb 13 11:02:23.764: INFO: Pod "pod-secrets-4c544f77-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.379300629s Feb 13 11:02:25.790: INFO: Pod "pod-secrets-4c544f77-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.4047172s Feb 13 11:02:27.812: INFO: Pod "pod-secrets-4c544f77-4e50-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.427122266s STEP: Saw pod success Feb 13 11:02:27.813: INFO: Pod "pod-secrets-4c544f77-4e50-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 11:02:27.821: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-4c544f77-4e50-11ea-aba9-0242ac110007 container secret-volume-test: STEP: delete the pod Feb 13 11:02:27.999: INFO: Waiting for pod pod-secrets-4c544f77-4e50-11ea-aba9-0242ac110007 to disappear Feb 13 11:02:28.013: INFO: Pod pod-secrets-4c544f77-4e50-11ea-aba9-0242ac110007 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:02:28.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-xfgc6" for this suite. 
Feb 13 11:02:34.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:02:34.263: INFO: namespace: e2e-tests-secrets-xfgc6, resource: bindings, ignored listing per whitelist Feb 13 11:02:34.316: INFO: namespace e2e-tests-secrets-xfgc6 deletion completed in 6.263896717s • [SLOW TEST:17.136 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:02:34.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 13 11:02:34.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 
--image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-nbqx5' Feb 13 11:02:34.827: INFO: stderr: "" Feb 13 11:02:34.827: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Feb 13 11:02:34.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-nbqx5' Feb 13 11:02:40.954: INFO: stderr: "" Feb 13 11:02:40.954: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:02:40.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nbqx5" for this suite. Feb 13 11:02:46.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:02:47.224: INFO: namespace: e2e-tests-kubectl-nbqx5, resource: bindings, ignored listing per whitelist Feb 13 11:02:47.251: INFO: namespace e2e-tests-kubectl-nbqx5 deletion completed in 6.285236193s • [SLOW TEST:12.935 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] 
[sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:02:47.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 13 11:02:47.484: INFO: Waiting up to 5m0s for pod "pod-5e4ad335-4e50-11ea-aba9-0242ac110007" in namespace "e2e-tests-emptydir-xhz8c" to be "success or failure" Feb 13 11:02:47.626: INFO: Pod "pod-5e4ad335-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 142.259962ms Feb 13 11:02:49.647: INFO: Pod "pod-5e4ad335-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162636454s Feb 13 11:02:51.665: INFO: Pod "pod-5e4ad335-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180649127s Feb 13 11:02:53.690: INFO: Pod "pod-5e4ad335-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206515304s Feb 13 11:02:55.708: INFO: Pod "pod-5e4ad335-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.223772852s Feb 13 11:02:58.588: INFO: Pod "pod-5e4ad335-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.103852078s Feb 13 11:03:00.618: INFO: Pod "pod-5e4ad335-4e50-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.134304335s STEP: Saw pod success Feb 13 11:03:00.618: INFO: Pod "pod-5e4ad335-4e50-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 11:03:00.629: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-5e4ad335-4e50-11ea-aba9-0242ac110007 container test-container: STEP: delete the pod Feb 13 11:03:00.794: INFO: Waiting for pod pod-5e4ad335-4e50-11ea-aba9-0242ac110007 to disappear Feb 13 11:03:00.805: INFO: Pod pod-5e4ad335-4e50-11ea-aba9-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:03:00.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-xhz8c" for this suite. Feb 13 11:03:07.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:03:07.318: INFO: namespace: e2e-tests-emptydir-xhz8c, resource: bindings, ignored listing per whitelist Feb 13 11:03:07.427: INFO: namespace e2e-tests-emptydir-xhz8c deletion completed in 6.612851905s • [SLOW TEST:20.175 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:03:07.427: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 13 11:03:07.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Feb 13 11:03:07.845: INFO: stderr: "" Feb 13 11:03:07.845: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Feb 13 11:03:07.860: INFO: Not supported for server versions before "1.13.12" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:03:07.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lgbzv" for this suite. 
Feb 13 11:03:14.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:03:14.177: INFO: namespace: e2e-tests-kubectl-lgbzv, resource: bindings, ignored listing per whitelist Feb 13 11:03:14.221: INFO: namespace e2e-tests-kubectl-lgbzv deletion completed in 6.344962635s S [SKIPPING] [6.794 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 13 11:03:07.860: Not supported for server versions before "1.13.12" /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:03:14.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-6e5256af-4e50-11ea-aba9-0242ac110007 STEP: Creating a pod to test consume secrets Feb 13 11:03:14.461: INFO: Waiting up to 
5m0s for pod "pod-secrets-6e5311a6-4e50-11ea-aba9-0242ac110007" in namespace "e2e-tests-secrets-4x2tx" to be "success or failure" Feb 13 11:03:14.483: INFO: Pod "pod-secrets-6e5311a6-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 22.110844ms Feb 13 11:03:16.571: INFO: Pod "pod-secrets-6e5311a6-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109744022s Feb 13 11:03:18.987: INFO: Pod "pod-secrets-6e5311a6-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.526252122s Feb 13 11:03:21.009: INFO: Pod "pod-secrets-6e5311a6-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.547482782s Feb 13 11:03:23.036: INFO: Pod "pod-secrets-6e5311a6-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.575334446s Feb 13 11:03:25.060: INFO: Pod "pod-secrets-6e5311a6-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.598608656s Feb 13 11:03:27.086: INFO: Pod "pod-secrets-6e5311a6-4e50-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.624633458s STEP: Saw pod success Feb 13 11:03:27.086: INFO: Pod "pod-secrets-6e5311a6-4e50-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 11:03:27.091: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-6e5311a6-4e50-11ea-aba9-0242ac110007 container secret-volume-test: STEP: delete the pod Feb 13 11:03:27.975: INFO: Waiting for pod pod-secrets-6e5311a6-4e50-11ea-aba9-0242ac110007 to disappear Feb 13 11:03:28.175: INFO: Pod pod-secrets-6e5311a6-4e50-11ea-aba9-0242ac110007 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:03:28.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-4x2tx" for this suite. 
Feb 13 11:03:34.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:03:34.408: INFO: namespace: e2e-tests-secrets-4x2tx, resource: bindings, ignored listing per whitelist Feb 13 11:03:34.408: INFO: namespace e2e-tests-secrets-4x2tx deletion completed in 6.220499131s • [SLOW TEST:20.187 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:03:34.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 13 11:03:45.278: INFO: Successfully updated pod "annotationupdate7a70c723-4e50-11ea-aba9-0242ac110007" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:03:47.373: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "e2e-tests-downward-api-ww7pv" for this suite. Feb 13 11:04:11.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:04:11.611: INFO: namespace: e2e-tests-downward-api-ww7pv, resource: bindings, ignored listing per whitelist Feb 13 11:04:11.639: INFO: namespace e2e-tests-downward-api-ww7pv deletion completed in 24.258829693s • [SLOW TEST:37.231 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:04:11.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Feb 13 11:04:11.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Feb 13 11:04:12.054: INFO: stderr: "" Feb 13 11:04:12.054: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:04:12.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-vb8d9" for this suite. Feb 13 11:04:18.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:04:18.180: INFO: namespace: e2e-tests-kubectl-vb8d9, resource: bindings, ignored listing per whitelist Feb 13 11:04:18.344: INFO: namespace e2e-tests-kubectl-vb8d9 deletion completed in 6.279698447s • [SLOW TEST:6.705 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:04:18.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Feb 13 11:04:18.628: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix940616455/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:04:18.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bh2dn" for this suite. 
Feb 13 11:04:24.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:04:24.789: INFO: namespace: e2e-tests-kubectl-bh2dn, resource: bindings, ignored listing per whitelist Feb 13 11:04:24.977: INFO: namespace e2e-tests-kubectl-bh2dn deletion completed in 6.248808656s • [SLOW TEST:6.632 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:04:24.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Feb 13 11:04:25.163: INFO: Waiting up to 5m0s for pod "client-containers-98869df7-4e50-11ea-aba9-0242ac110007" in namespace "e2e-tests-containers-cjbpr" to be "success or failure" Feb 13 11:04:25.172: INFO: Pod "client-containers-98869df7-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", 
readiness=false. Elapsed: 9.366001ms Feb 13 11:04:27.182: INFO: Pod "client-containers-98869df7-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019124493s Feb 13 11:04:29.199: INFO: Pod "client-containers-98869df7-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03585694s Feb 13 11:04:31.222: INFO: Pod "client-containers-98869df7-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059322376s Feb 13 11:04:33.234: INFO: Pod "client-containers-98869df7-4e50-11ea-aba9-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 8.071243932s Feb 13 11:04:35.252: INFO: Pod "client-containers-98869df7-4e50-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089762735s STEP: Saw pod success Feb 13 11:04:35.253: INFO: Pod "client-containers-98869df7-4e50-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 11:04:35.260: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-98869df7-4e50-11ea-aba9-0242ac110007 container test-container: STEP: delete the pod Feb 13 11:04:35.409: INFO: Waiting for pod client-containers-98869df7-4e50-11ea-aba9-0242ac110007 to disappear Feb 13 11:04:35.420: INFO: Pod client-containers-98869df7-4e50-11ea-aba9-0242ac110007 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:04:35.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-cjbpr" for this suite. 
Feb 13 11:04:41.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:04:41.627: INFO: namespace: e2e-tests-containers-cjbpr, resource: bindings, ignored listing per whitelist Feb 13 11:04:41.630: INFO: namespace e2e-tests-containers-cjbpr deletion completed in 6.202700753s • [SLOW TEST:16.652 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:04:41.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 13 11:04:41.951: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a27ed552-4e50-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-n772m" to be "success or failure" Feb 13 11:04:42.021: INFO: Pod "downwardapi-volume-a27ed552-4e50-11ea-aba9-0242ac110007": 
Phase="Pending", Reason="", readiness=false. Elapsed: 69.14836ms Feb 13 11:04:44.033: INFO: Pod "downwardapi-volume-a27ed552-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081827113s Feb 13 11:04:46.101: INFO: Pod "downwardapi-volume-a27ed552-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149044409s Feb 13 11:04:48.116: INFO: Pod "downwardapi-volume-a27ed552-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164355223s Feb 13 11:04:50.160: INFO: Pod "downwardapi-volume-a27ed552-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.208074304s Feb 13 11:04:52.175: INFO: Pod "downwardapi-volume-a27ed552-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.223916685s Feb 13 11:04:55.225: INFO: Pod "downwardapi-volume-a27ed552-4e50-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.273062996s STEP: Saw pod success Feb 13 11:04:55.225: INFO: Pod "downwardapi-volume-a27ed552-4e50-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 11:04:55.471: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a27ed552-4e50-11ea-aba9-0242ac110007 container client-container: STEP: delete the pod Feb 13 11:04:55.670: INFO: Waiting for pod downwardapi-volume-a27ed552-4e50-11ea-aba9-0242ac110007 to disappear Feb 13 11:04:55.679: INFO: Pod downwardapi-volume-a27ed552-4e50-11ea-aba9-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:04:55.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-n772m" for this suite. 
Feb 13 11:05:01.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:05:01.944: INFO: namespace: e2e-tests-projected-n772m, resource: bindings, ignored listing per whitelist Feb 13 11:05:02.021: INFO: namespace e2e-tests-projected-n772m deletion completed in 6.332318811s • [SLOW TEST:20.390 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:05:02.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 13 11:05:15.007: INFO: Successfully updated pod "annotationupdateaeae1d8b-4e50-11ea-aba9-0242ac110007" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:05:17.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "e2e-tests-projected-mxgm2" for this suite. Feb 13 11:05:41.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:05:41.179: INFO: namespace: e2e-tests-projected-mxgm2, resource: bindings, ignored listing per whitelist Feb 13 11:05:41.308: INFO: namespace e2e-tests-projected-mxgm2 deletion completed in 24.207492648s • [SLOW TEST:39.286 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:05:41.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-4wcs5 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 13 11:05:41.482: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 13 11:06:13.925: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 
http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-4wcs5 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 13 11:06:13.926: INFO: >>> kubeConfig: /root/.kube/config I0213 11:06:14.066786 8 log.go:172] (0xc0009c6c60) (0xc00097c780) Create stream I0213 11:06:14.066857 8 log.go:172] (0xc0009c6c60) (0xc00097c780) Stream added, broadcasting: 1 I0213 11:06:14.071730 8 log.go:172] (0xc0009c6c60) Reply frame received for 1 I0213 11:06:14.071755 8 log.go:172] (0xc0009c6c60) (0xc000ceb540) Create stream I0213 11:06:14.071766 8 log.go:172] (0xc0009c6c60) (0xc000ceb540) Stream added, broadcasting: 3 I0213 11:06:14.072758 8 log.go:172] (0xc0009c6c60) Reply frame received for 3 I0213 11:06:14.072779 8 log.go:172] (0xc0009c6c60) (0xc00097c820) Create stream I0213 11:06:14.072787 8 log.go:172] (0xc0009c6c60) (0xc00097c820) Stream added, broadcasting: 5 I0213 11:06:14.073719 8 log.go:172] (0xc0009c6c60) Reply frame received for 5 I0213 11:06:14.277530 8 log.go:172] (0xc0009c6c60) Data frame received for 3 I0213 11:06:14.277638 8 log.go:172] (0xc000ceb540) (3) Data frame handling I0213 11:06:14.277657 8 log.go:172] (0xc000ceb540) (3) Data frame sent I0213 11:06:14.400602 8 log.go:172] (0xc0009c6c60) (0xc000ceb540) Stream removed, broadcasting: 3 I0213 11:06:14.400722 8 log.go:172] (0xc0009c6c60) Data frame received for 1 I0213 11:06:14.400739 8 log.go:172] (0xc00097c780) (1) Data frame handling I0213 11:06:14.400765 8 log.go:172] (0xc00097c780) (1) Data frame sent I0213 11:06:14.401184 8 log.go:172] (0xc0009c6c60) (0xc00097c820) Stream removed, broadcasting: 5 I0213 11:06:14.401326 8 log.go:172] (0xc0009c6c60) (0xc00097c780) Stream removed, broadcasting: 1 I0213 11:06:14.401358 8 log.go:172] (0xc0009c6c60) Go away received I0213 11:06:14.401701 8 log.go:172] (0xc0009c6c60) (0xc00097c780) Stream removed, broadcasting: 1 I0213 11:06:14.401725 8 log.go:172] (0xc0009c6c60) 
(0xc000ceb540) Stream removed, broadcasting: 3 I0213 11:06:14.401742 8 log.go:172] (0xc0009c6c60) (0xc00097c820) Stream removed, broadcasting: 5 Feb 13 11:06:14.401: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:06:14.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-4wcs5" for this suite. Feb 13 11:06:40.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:06:40.527: INFO: namespace: e2e-tests-pod-network-test-4wcs5, resource: bindings, ignored listing per whitelist Feb 13 11:06:40.632: INFO: namespace e2e-tests-pod-network-test-4wcs5 deletion completed in 26.213201463s • [SLOW TEST:59.324 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:06:40.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 13 11:06:40.914: INFO: Waiting up to 5m0s for pod "pod-e96da0e5-4e50-11ea-aba9-0242ac110007" in namespace "e2e-tests-emptydir-vk6h9" to be "success or failure" Feb 13 11:06:40.936: INFO: Pod "pod-e96da0e5-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 21.673026ms Feb 13 11:06:42.996: INFO: Pod "pod-e96da0e5-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081973977s Feb 13 11:06:45.289: INFO: Pod "pod-e96da0e5-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.374670725s Feb 13 11:06:47.324: INFO: Pod "pod-e96da0e5-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.40953593s Feb 13 11:06:49.351: INFO: Pod "pod-e96da0e5-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.436163593s Feb 13 11:06:51.379: INFO: Pod "pod-e96da0e5-4e50-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.464931462s Feb 13 11:06:54.364: INFO: Pod "pod-e96da0e5-4e50-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.44912664s STEP: Saw pod success Feb 13 11:06:54.364: INFO: Pod "pod-e96da0e5-4e50-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 11:06:54.375: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e96da0e5-4e50-11ea-aba9-0242ac110007 container test-container: STEP: delete the pod Feb 13 11:06:54.844: INFO: Waiting for pod pod-e96da0e5-4e50-11ea-aba9-0242ac110007 to disappear Feb 13 11:06:54.866: INFO: Pod pod-e96da0e5-4e50-11ea-aba9-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:06:54.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-vk6h9" for this suite. Feb 13 11:07:00.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:07:01.032: INFO: namespace: e2e-tests-emptydir-vk6h9, resource: bindings, ignored listing per whitelist Feb 13 11:07:01.083: INFO: namespace e2e-tests-emptydir-vk6h9 deletion completed in 6.197825417s • [SLOW TEST:20.451 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:07:01.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to 
be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 13 11:07:01.576: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 13 11:07:07.253: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 13 11:07:09.708: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 13 11:07:11.721: INFO: Creating deployment "test-rollover-deployment" Feb 13 11:07:11.805: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 13 11:07:14.280: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 13 11:07:14.313: INFO: Ensure that both replica sets have 1 created replica Feb 13 11:07:14.333: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 13 11:07:14.356: INFO: Updating deployment test-rollover-deployment Feb 13 11:07:14.356: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 13 11:07:16.799: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 13 11:07:16.810: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 13 11:07:16.819: INFO: all replica sets need to contain the pod-template-hash label Feb 13 11:07:16.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188835, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 11:07:18.922: INFO: all replica sets need to contain the pod-template-hash label Feb 13 11:07:18.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188835, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 11:07:20.854: INFO: all replica sets need to contain the pod-template-hash label Feb 13 11:07:20.854: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, 
loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188835, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 11:07:22.963: INFO: all replica sets need to contain the pod-template-hash label Feb 13 11:07:22.963: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188835, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 11:07:24.913: INFO: all replica sets need to contain the pod-template-hash label Feb 13 11:07:24.913: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188835, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 11:07:26.848: INFO: all replica sets need to contain the pod-template-hash label Feb 13 11:07:26.848: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188846, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 11:07:28.847: INFO: all replica sets need to contain the pod-template-hash label Feb 13 11:07:28.847: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188846, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 11:07:30.865: INFO: all replica sets need to contain the pod-template-hash label Feb 13 11:07:30.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188846, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 11:07:32.873: INFO: all 
replica sets need to contain the pod-template-hash label Feb 13 11:07:32.873: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188846, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 11:07:34.848: INFO: all replica sets need to contain the pod-template-hash label Feb 13 11:07:34.848: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188846, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 11:07:36.985: INFO: Feb 13 11:07:36.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188856, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717188831, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 11:07:38.845: INFO: Feb 13 11:07:38.845: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 13 11:07:38.861: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-kntvb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kntvb/deployments/test-rollover-deployment,UID:fbcf7fcc-4e50-11ea-a994-fa163e34d433,ResourceVersion:21522081,Generation:2,CreationTimestamp:2020-02-13 11:07:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-13 11:07:11 +0000 UTC 2020-02-13 11:07:11 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-13 11:07:37 +0000 UTC 2020-02-13 11:07:11 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 13 11:07:38.866: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-kntvb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kntvb/replicasets/test-rollover-deployment-5b8479fdb6,UID:fd63f76f-4e50-11ea-a994-fa163e34d433,ResourceVersion:21522071,Generation:2,CreationTimestamp:2020-02-13 11:07:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment fbcf7fcc-4e50-11ea-a994-fa163e34d433 0xc001c624c7 0xc001c624c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 13 11:07:38.867: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 13 11:07:38.867: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-kntvb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kntvb/replicasets/test-rollover-controller,UID:f5bfab4e-4e50-11ea-a994-fa163e34d433,ResourceVersion:21522080,Generation:2,CreationTimestamp:2020-02-13 11:07:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment fbcf7fcc-4e50-11ea-a994-fa163e34d433 0xc001c6231f 0xc001c62330}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 13 11:07:38.867: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-kntvb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kntvb/replicasets/test-rollover-deployment-58494b7559,UID:fbe9a361-4e50-11ea-a994-fa163e34d433,ResourceVersion:21522031,Generation:2,CreationTimestamp:2020-02-13 11:07:11 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment fbcf7fcc-4e50-11ea-a994-fa163e34d433 0xc001c623f7 0xc001c623f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 13 11:07:38.874: INFO: Pod "test-rollover-deployment-5b8479fdb6-24g4c" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-24g4c,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-kntvb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kntvb/pods/test-rollover-deployment-5b8479fdb6-24g4c,UID:fdbac3e6-4e50-11ea-a994-fa163e34d433,ResourceVersion:21522055,Generation:0,CreationTimestamp:2020-02-13 11:07:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 fd63f76f-4e50-11ea-a994-fa163e34d433 0xc001c63067 0xc001c63068}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-7g52z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7g52z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-7g52z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c630d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c630f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:07:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:07:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:07:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:07:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-13 11:07:15 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-13 11:07:25 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://b906bca87b32f48011f8d87aa7db2ba8d9454e16e740e4c82c6cefe5e0e344ca}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:07:38.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-kntvb" for this suite.
Feb 13 11:07:48.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:07:49.087: INFO: namespace: e2e-tests-deployment-kntvb, resource: bindings, ignored listing per whitelist
Feb 13 11:07:49.156: INFO: namespace e2e-tests-deployment-kntvb deletion completed in 10.275256588s

• [SLOW TEST:48.073 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:07:49.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0213 11:07:50.957873 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 13 11:07:50.958: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:07:50.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-mbtjx" for this suite.
Feb 13 11:07:57.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:07:57.867: INFO: namespace: e2e-tests-gc-mbtjx, resource: bindings, ignored listing per whitelist
Feb 13 11:07:57.888: INFO: namespace e2e-tests-gc-mbtjx deletion completed in 6.924828714s

• [SLOW TEST:8.731 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:07:57.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 13 11:07:58.095: INFO: Number of nodes with available pods: 0
Feb 13 11:07:58.095: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:07:59.140: INFO: Number of nodes with available pods: 0
Feb 13 11:07:59.141: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:00.115: INFO: Number of nodes with available pods: 0
Feb 13 11:08:00.115: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:01.131: INFO: Number of nodes with available pods: 0
Feb 13 11:08:01.131: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:02.156: INFO: Number of nodes with available pods: 0
Feb 13 11:08:02.156: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:03.151: INFO: Number of nodes with available pods: 0
Feb 13 11:08:03.151: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:04.643: INFO: Number of nodes with available pods: 0
Feb 13 11:08:04.643: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:05.269: INFO: Number of nodes with available pods: 0
Feb 13 11:08:05.269: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:06.222: INFO: Number of nodes with available pods: 0
Feb 13 11:08:06.222: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:07.126: INFO: Number of nodes with available pods: 0
Feb 13 11:08:07.126: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:08.144: INFO: Number of nodes with available pods: 0
Feb 13 11:08:08.144: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:09.189: INFO: Number of nodes with available pods: 1
Feb 13 11:08:09.189: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 13 11:08:09.446: INFO: Number of nodes with available pods: 0
Feb 13 11:08:09.446: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:10.477: INFO: Number of nodes with available pods: 0
Feb 13 11:08:10.477: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:11.668: INFO: Number of nodes with available pods: 0
Feb 13 11:08:11.669: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:12.496: INFO: Number of nodes with available pods: 0
Feb 13 11:08:12.496: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:13.461: INFO: Number of nodes with available pods: 0
Feb 13 11:08:13.461: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:14.465: INFO: Number of nodes with available pods: 0
Feb 13 11:08:14.465: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:15.465: INFO: Number of nodes with available pods: 0
Feb 13 11:08:15.465: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:16.482: INFO: Number of nodes with available pods: 0
Feb 13 11:08:16.482: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:17.997: INFO: Number of nodes with available pods: 0
Feb 13 11:08:17.997: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:18.570: INFO: Number of nodes with available pods: 0
Feb 13 11:08:18.570: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:08:19.471: INFO: Number of nodes with available pods: 1
Feb 13 11:08:19.471: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-4ljtr, will wait for the garbage collector to delete the pods
Feb 13 11:08:19.562: INFO: Deleting DaemonSet.extensions daemon-set took: 23.113216ms
Feb 13 11:08:19.662: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.237224ms
Feb 13 11:08:27.168: INFO: Number of nodes with available pods: 0
Feb 13 11:08:27.168: INFO: Number of running nodes: 0, number of available pods: 0
Feb 13 11:08:27.175: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4ljtr/daemonsets","resourceVersion":"21522252"},"items":null}
Feb 13 11:08:27.180: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4ljtr/pods","resourceVersion":"21522252"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:08:27.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-4ljtr" for this suite.
Feb 13 11:08:35.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:08:35.465: INFO: namespace: e2e-tests-daemonsets-4ljtr, resource: bindings, ignored listing per whitelist
Feb 13 11:08:35.534: INFO: namespace e2e-tests-daemonsets-4ljtr deletion completed in 8.33472515s

• [SLOW TEST:37.646 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:08:35.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 13 11:08:35.897: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:09:02.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-7s285" for this suite.
Feb 13 11:09:26.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:09:27.070: INFO: namespace: e2e-tests-init-container-7s285, resource: bindings, ignored listing per whitelist
Feb 13 11:09:27.127: INFO: namespace e2e-tests-init-container-7s285 deletion completed in 24.192235642s

• [SLOW TEST:51.593 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:09:27.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 13 11:09:29.395: INFO: Waiting up to 5m0s for pod "pod-4dd9181d-4e51-11ea-aba9-0242ac110007" in namespace "e2e-tests-emptydir-7kpq4" to be "success or failure"
Feb 13 11:09:29.412: INFO: Pod "pod-4dd9181d-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 16.966727ms
Feb 13 11:09:31.447: INFO: Pod "pod-4dd9181d-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052192407s
Feb 13 11:09:33.465: INFO: Pod "pod-4dd9181d-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070107121s
Feb 13 11:09:36.851: INFO: Pod "pod-4dd9181d-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.456823972s
Feb 13 11:09:38.875: INFO: Pod "pod-4dd9181d-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.480269885s
Feb 13 11:09:40.899: INFO: Pod "pod-4dd9181d-4e51-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.504366313s
STEP: Saw pod success
Feb 13 11:09:40.899: INFO: Pod "pod-4dd9181d-4e51-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 11:09:40.904: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4dd9181d-4e51-11ea-aba9-0242ac110007 container test-container:
STEP: delete the pod
Feb 13 11:09:41.072: INFO: Waiting for pod pod-4dd9181d-4e51-11ea-aba9-0242ac110007 to disappear
Feb 13 11:09:41.091: INFO: Pod pod-4dd9181d-4e51-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:09:41.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7kpq4" for this suite.
Feb 13 11:09:47.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:09:47.206: INFO: namespace: e2e-tests-emptydir-7kpq4, resource: bindings, ignored listing per whitelist
Feb 13 11:09:47.323: INFO: namespace e2e-tests-emptydir-7kpq4 deletion completed in 6.216936081s

• [SLOW TEST:20.196 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:09:47.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Feb 13 11:09:47.568: INFO: Waiting up to 5m0s for pod "var-expansion-58b06322-4e51-11ea-aba9-0242ac110007" in namespace "e2e-tests-var-expansion-xt2xm" to be "success or failure"
Feb 13 11:09:47.586: INFO: Pod "var-expansion-58b06322-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 18.647783ms
Feb 13 11:09:49.601: INFO: Pod "var-expansion-58b06322-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033511663s
Feb 13 11:09:51.617: INFO: Pod "var-expansion-58b06322-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049260916s
Feb 13 11:09:54.385: INFO: Pod "var-expansion-58b06322-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.817323974s
Feb 13 11:09:56.400: INFO: Pod "var-expansion-58b06322-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.831928308s
Feb 13 11:10:01.382: INFO: Pod "var-expansion-58b06322-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 13.813800653s
Feb 13 11:10:03.397: INFO: Pod "var-expansion-58b06322-4e51-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.828877741s
STEP: Saw pod success
Feb 13 11:10:03.397: INFO: Pod "var-expansion-58b06322-4e51-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 11:10:03.402: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-58b06322-4e51-11ea-aba9-0242ac110007 container dapi-container:
STEP: delete the pod
Feb 13 11:10:04.116: INFO: Waiting for pod var-expansion-58b06322-4e51-11ea-aba9-0242ac110007 to disappear
Feb 13 11:10:04.888: INFO: Pod var-expansion-58b06322-4e51-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:10:04.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-xt2xm" for this suite.
Feb 13 11:10:11.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:10:11.215: INFO: namespace: e2e-tests-var-expansion-xt2xm, resource: bindings, ignored listing per whitelist Feb 13 11:10:11.476: INFO: namespace e2e-tests-var-expansion-xt2xm deletion completed in 6.557583849s • [SLOW TEST:24.152 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:10:11.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-rw5fq Feb 13 11:10:21.714: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-rw5fq STEP: checking the pod's current state and verifying that restartCount is present Feb 13 11:10:21.718: INFO: Initial restart count of pod liveness-http is 0 
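[Editor's note] The `liveness-http` pod being probed here is shaped roughly like the sketch below, a hypothetical manifest: the image, port, and probe timings are assumptions for illustration, not the values used by the e2e framework. The essential elements are an HTTP `livenessProbe` against `/healthz` and a low `failureThreshold`, so that when the handler starts failing, the kubelet restarts the container and `restartCount` increments (which is what the test asserts, going from 0 to 1 below).

```yaml
# Hypothetical sketch of a pod with an HTTP liveness probe on /healthz.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # illustrative; serves /healthz then starts failing
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080               # illustrative port
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1        # a single failed probe triggers a restart
```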
Feb 13 11:10:44.133: INFO: Restart count of pod e2e-tests-container-probe-rw5fq/liveness-http is now 1 (22.414970899s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:10:44.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-rw5fq" for this suite. Feb 13 11:10:50.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:10:50.473: INFO: namespace: e2e-tests-container-probe-rw5fq, resource: bindings, ignored listing per whitelist Feb 13 11:10:50.641: INFO: namespace e2e-tests-container-probe-rw5fq deletion completed in 6.340840299s • [SLOW TEST:39.165 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:10:50.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 13 11:10:50.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-gj9kn' Feb 13 11:10:53.067: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 13 11:10:53.068: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Feb 13 11:10:53.131: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Feb 13 11:10:53.192: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Feb 13 11:10:53.294: INFO: scanned /root for discovery docs: Feb 13 11:10:53.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-gj9kn' Feb 13 11:11:20.798: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 13 11:11:20.798: INFO: stdout: "Created e2e-test-nginx-rc-576dde2fcaf3502a94643eda6b70ad77\nScaling up e2e-test-nginx-rc-576dde2fcaf3502a94643eda6b70ad77 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-576dde2fcaf3502a94643eda6b70ad77 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-576dde2fcaf3502a94643eda6b70ad77 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Feb 13 11:11:20.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-gj9kn' Feb 13 11:11:20.946: INFO: stderr: "" Feb 13 11:11:20.946: INFO: stdout: "e2e-test-nginx-rc-576dde2fcaf3502a94643eda6b70ad77-hm7lq " Feb 13 11:11:20.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-576dde2fcaf3502a94643eda6b70ad77-hm7lq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gj9kn' Feb 13 11:11:21.086: INFO: stderr: "" Feb 13 11:11:21.086: INFO: stdout: "true" Feb 13 11:11:21.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-576dde2fcaf3502a94643eda6b70ad77-hm7lq -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gj9kn' Feb 13 11:11:21.181: INFO: stderr: "" Feb 13 11:11:21.181: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Feb 13 11:11:21.181: INFO: e2e-test-nginx-rc-576dde2fcaf3502a94643eda6b70ad77-hm7lq is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Feb 13 11:11:21.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-gj9kn' Feb 13 11:11:21.380: INFO: stderr: "" Feb 13 11:11:21.380: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:11:21.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gj9kn" for this suite. 
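[Editor's note] The deprecated `kubectl run --generator=run/v1` invocation at the start of this test creates a bare ReplicationController, roughly equivalent to applying a manifest like the sketch below. This is a hypothetical reconstruction: the RC name, container name, image, and `run` label are taken from the log above, but the exact generated spec may differ. `rolling-update` then replaces this RC pod-by-pod with a temporary RC (the `e2e-test-nginx-rc-576dde...` controller seen in the log) before renaming it back.

```yaml
# Hypothetical equivalent of:
#   kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc       # the label the test later queries with -l run=e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```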
Feb 13 11:11:45.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:11:45.534: INFO: namespace: e2e-tests-kubectl-gj9kn, resource: bindings, ignored listing per whitelist Feb 13 11:11:45.571: INFO: namespace e2e-tests-kubectl-gj9kn deletion completed in 24.16821092s • [SLOW TEST:54.929 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:11:45.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 13 11:11:45.893: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-thccv,SelfLink:/api/v1/namespaces/e2e-tests-watch-thccv/configmaps/e2e-watch-test-label-changed,UID:9f294526-4e51-11ea-a994-fa163e34d433,ResourceVersion:21522698,Generation:0,CreationTimestamp:2020-02-13 11:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 13 11:11:45.893: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-thccv,SelfLink:/api/v1/namespaces/e2e-tests-watch-thccv/configmaps/e2e-watch-test-label-changed,UID:9f294526-4e51-11ea-a994-fa163e34d433,ResourceVersion:21522699,Generation:0,CreationTimestamp:2020-02-13 11:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 13 11:11:45.893: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-thccv,SelfLink:/api/v1/namespaces/e2e-tests-watch-thccv/configmaps/e2e-watch-test-label-changed,UID:9f294526-4e51-11ea-a994-fa163e34d433,ResourceVersion:21522700,Generation:0,CreationTimestamp:2020-02-13 11:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Feb 13 11:11:56.063: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-thccv,SelfLink:/api/v1/namespaces/e2e-tests-watch-thccv/configmaps/e2e-watch-test-label-changed,UID:9f294526-4e51-11ea-a994-fa163e34d433,ResourceVersion:21522714,Generation:0,CreationTimestamp:2020-02-13 11:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 13 11:11:56.064: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-thccv,SelfLink:/api/v1/namespaces/e2e-tests-watch-thccv/configmaps/e2e-watch-test-label-changed,UID:9f294526-4e51-11ea-a994-fa163e34d433,ResourceVersion:21522715,Generation:0,CreationTimestamp:2020-02-13 11:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Feb 13 11:11:56.064: 
INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-thccv,SelfLink:/api/v1/namespaces/e2e-tests-watch-thccv/configmaps/e2e-watch-test-label-changed,UID:9f294526-4e51-11ea-a994-fa163e34d433,ResourceVersion:21522716,Generation:0,CreationTimestamp:2020-02-13 11:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:11:56.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-thccv" for this suite. Feb 13 11:12:02.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:12:02.326: INFO: namespace: e2e-tests-watch-thccv, resource: bindings, ignored listing per whitelist Feb 13 11:12:02.457: INFO: namespace e2e-tests-watch-thccv deletion completed in 6.328707054s • [SLOW TEST:16.886 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:12:02.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Feb 13 11:12:02.904: INFO: Waiting up to 5m0s for pod "var-expansion-a9471ada-4e51-11ea-aba9-0242ac110007" in namespace "e2e-tests-var-expansion-kznc9" to be "success or failure" Feb 13 11:12:02.910: INFO: Pod "var-expansion-a9471ada-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 5.838816ms Feb 13 11:12:04.928: INFO: Pod "var-expansion-a9471ada-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024466175s Feb 13 11:12:06.967: INFO: Pod "var-expansion-a9471ada-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06372337s Feb 13 11:12:09.502: INFO: Pod "var-expansion-a9471ada-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.598166799s Feb 13 11:12:11.513: INFO: Pod "var-expansion-a9471ada-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.608751246s Feb 13 11:12:13.524: INFO: Pod "var-expansion-a9471ada-4e51-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.62030312s STEP: Saw pod success Feb 13 11:12:13.524: INFO: Pod "var-expansion-a9471ada-4e51-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 11:12:13.530: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-a9471ada-4e51-11ea-aba9-0242ac110007 container dapi-container: STEP: delete the pod Feb 13 11:12:14.195: INFO: Waiting for pod var-expansion-a9471ada-4e51-11ea-aba9-0242ac110007 to disappear Feb 13 11:12:14.510: INFO: Pod var-expansion-a9471ada-4e51-11ea-aba9-0242ac110007 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:12:14.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-kznc9" for this suite. Feb 13 11:12:20.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:12:20.838: INFO: namespace: e2e-tests-var-expansion-kznc9, resource: bindings, ignored listing per whitelist Feb 13 11:12:20.855: INFO: namespace e2e-tests-var-expansion-kznc9 deletion completed in 6.302812582s • [SLOW TEST:18.398 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes 
client Feb 13 11:12:20.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Feb 13 11:12:21.291: INFO: namespace e2e-tests-kubectl-j78vd Feb 13 11:12:21.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-j78vd' Feb 13 11:12:21.885: INFO: stderr: "" Feb 13 11:12:21.886: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Feb 13 11:12:23.249: INFO: Selector matched 1 pods for map[app:redis] Feb 13 11:12:23.249: INFO: Found 0 / 1 Feb 13 11:12:23.910: INFO: Selector matched 1 pods for map[app:redis] Feb 13 11:12:23.910: INFO: Found 0 / 1 Feb 13 11:12:24.898: INFO: Selector matched 1 pods for map[app:redis] Feb 13 11:12:24.898: INFO: Found 0 / 1 Feb 13 11:12:25.902: INFO: Selector matched 1 pods for map[app:redis] Feb 13 11:12:25.902: INFO: Found 0 / 1 Feb 13 11:12:26.899: INFO: Selector matched 1 pods for map[app:redis] Feb 13 11:12:26.899: INFO: Found 0 / 1 Feb 13 11:12:27.896: INFO: Selector matched 1 pods for map[app:redis] Feb 13 11:12:27.896: INFO: Found 0 / 1 Feb 13 11:12:28.943: INFO: Selector matched 1 pods for map[app:redis] Feb 13 11:12:28.943: INFO: Found 0 / 1 Feb 13 11:12:29.907: INFO: Selector matched 1 pods for map[app:redis] Feb 13 11:12:29.907: INFO: Found 0 / 1 Feb 13 11:12:30.901: INFO: Selector matched 1 pods for map[app:redis] Feb 13 11:12:30.902: INFO: Found 1 / 1 Feb 13 11:12:30.902: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Feb 13 11:12:30.908: INFO: Selector matched 1 pods for map[app:redis] Feb 13 11:12:30.908: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 13 11:12:30.908: INFO: wait on redis-master startup in e2e-tests-kubectl-j78vd Feb 13 11:12:30.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-lq57x redis-master --namespace=e2e-tests-kubectl-j78vd' Feb 13 11:12:31.106: INFO: stderr: "" Feb 13 11:12:31.106: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 13 Feb 11:12:29.282 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 13 Feb 11:12:29.282 # Server started, Redis version 3.2.12\n1:M 13 Feb 11:12:29.283 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 13 Feb 11:12:29.283 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Feb 13 11:12:31.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-j78vd' Feb 13 11:12:31.313: INFO: stderr: "" Feb 13 11:12:31.314: INFO: stdout: "service/rm2 exposed\n" Feb 13 11:12:31.412: INFO: Service rm2 in namespace e2e-tests-kubectl-j78vd found. STEP: exposing service Feb 13 11:12:33.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-j78vd' Feb 13 11:12:33.754: INFO: stderr: "" Feb 13 11:12:33.754: INFO: stdout: "service/rm3 exposed\n" Feb 13 11:12:33.889: INFO: Service rm3 in namespace e2e-tests-kubectl-j78vd found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:12:35.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-j78vd" for this suite. 
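[Editor's note] The first `kubectl expose` in this test generates a Service roughly like the sketch below, a hypothetical reconstruction: the `app: redis` selector is inferred from the pod-matching lines in the log ("Selector matched 1 pods for map[app:redis]"), and the generated Service may carry additional defaulted fields. `expose` maps the given `--port` to the pods' `--target-port`; the second `expose service rm2` command then creates `rm3` the same way, fronting the same endpoints on port 2345.

```yaml
# Hypothetical equivalent of:
#   kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis        # inferred from the log's selector output; an assumption
  ports:
  - port: 1234        # port the Service listens on
    targetPort: 6379  # port on the redis-master pods
```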
Feb 13 11:13:00.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:13:00.033: INFO: namespace: e2e-tests-kubectl-j78vd, resource: bindings, ignored listing per whitelist Feb 13 11:13:00.197: INFO: namespace e2e-tests-kubectl-j78vd deletion completed in 24.274256127s • [SLOW TEST:39.340 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:13:00.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 13 11:13:13.226: INFO: Successfully updated pod "labelsupdatecba1e587-4e51-11ea-aba9-0242ac110007" [AfterEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:13:15.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xjbsw" for this suite. Feb 13 11:13:39.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:13:39.767: INFO: namespace: e2e-tests-projected-xjbsw, resource: bindings, ignored listing per whitelist Feb 13 11:13:39.884: INFO: namespace e2e-tests-projected-xjbsw deletion completed in 24.488189374s • [SLOW TEST:39.687 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:13:39.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-e34fce88-4e51-11ea-aba9-0242ac110007 Feb 13 11:13:40.146: INFO: Pod name my-hostname-basic-e34fce88-4e51-11ea-aba9-0242ac110007: Found 0 pods out of 1 Feb 13 11:13:45.168: INFO: Pod 
name my-hostname-basic-e34fce88-4e51-11ea-aba9-0242ac110007: Found 1 pods out of 1 Feb 13 11:13:45.168: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e34fce88-4e51-11ea-aba9-0242ac110007" are running Feb 13 11:13:51.195: INFO: Pod "my-hostname-basic-e34fce88-4e51-11ea-aba9-0242ac110007-gtmxq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 11:13:40 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 11:13:40 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e34fce88-4e51-11ea-aba9-0242ac110007]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 11:13:40 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e34fce88-4e51-11ea-aba9-0242ac110007]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 11:13:40 +0000 UTC Reason: Message:}]) Feb 13 11:13:51.195: INFO: Trying to dial the pod Feb 13 11:13:56.267: INFO: Controller my-hostname-basic-e34fce88-4e51-11ea-aba9-0242ac110007: Got expected result from replica 1 [my-hostname-basic-e34fce88-4e51-11ea-aba9-0242ac110007-gtmxq]: "my-hostname-basic-e34fce88-4e51-11ea-aba9-0242ac110007-gtmxq", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:13:56.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-5jsqh" for this suite. 
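[Editor's note] The ReplicationController this test creates is shaped roughly like the sketch below; a hypothetical manifest with an abbreviated name (the real one carries a UUID suffix, as seen in the log) and an assumed image. The idea the test exercises is that each replica runs a container serving its own pod hostname over HTTP, so dialing each replica and reading back its pod name (as the log does with `Got expected result from replica 1`) verifies that every replica is reachable and distinct.

```yaml
# Hypothetical sketch of the hostname-serving RC (name abbreviated; image assumed).
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/serve_hostname   # illustrative: responds with the pod's hostname
        ports:
        - containerPort: 9376
```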
Feb 13 11:14:02.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:14:02.451: INFO: namespace: e2e-tests-replication-controller-5jsqh, resource: bindings, ignored listing per whitelist Feb 13 11:14:02.621: INFO: namespace e2e-tests-replication-controller-5jsqh deletion completed in 6.334266902s • [SLOW TEST:22.736 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:14:02.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-f0df01e6-4e51-11ea-aba9-0242ac110007 STEP: Creating a pod to test consume configMaps Feb 13 11:14:02.905: INFO: Waiting up to 5m0s for pod "pod-configmaps-f0e011ec-4e51-11ea-aba9-0242ac110007" in namespace "e2e-tests-configmap-bvrnm" to be "success or failure" Feb 13 11:14:02.919: INFO: Pod "pod-configmaps-f0e011ec-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.596677ms Feb 13 11:14:04.983: INFO: Pod "pod-configmaps-f0e011ec-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078297158s Feb 13 11:14:07.005: INFO: Pod "pod-configmaps-f0e011ec-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099804697s Feb 13 11:14:09.720: INFO: Pod "pod-configmaps-f0e011ec-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.815325361s Feb 13 11:14:11.739: INFO: Pod "pod-configmaps-f0e011ec-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.834333317s Feb 13 11:14:13.758: INFO: Pod "pod-configmaps-f0e011ec-4e51-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.853428394s Feb 13 11:14:15.773: INFO: Pod "pod-configmaps-f0e011ec-4e51-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.867483214s STEP: Saw pod success Feb 13 11:14:15.773: INFO: Pod "pod-configmaps-f0e011ec-4e51-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 11:14:15.776: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f0e011ec-4e51-11ea-aba9-0242ac110007 container configmap-volume-test: STEP: delete the pod Feb 13 11:14:15.935: INFO: Waiting for pod pod-configmaps-f0e011ec-4e51-11ea-aba9-0242ac110007 to disappear Feb 13 11:14:16.769: INFO: Pod pod-configmaps-f0e011ec-4e51-11ea-aba9-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:14:16.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-bvrnm" for this suite. 
Feb 13 11:14:22.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:14:22.916: INFO: namespace: e2e-tests-configmap-bvrnm, resource: bindings, ignored listing per whitelist Feb 13 11:14:22.981: INFO: namespace e2e-tests-configmap-bvrnm deletion completed in 6.178105395s • [SLOW TEST:20.359 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:14:22.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-9m46g [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector 
baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-9m46g STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-9m46g Feb 13 11:14:23.357: INFO: Found 0 stateful pods, waiting for 1 Feb 13 11:14:33.405: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Feb 13 11:14:33.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9m46g ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 13 11:14:34.171: INFO: stderr: "I0213 11:14:33.659995 606 log.go:172] (0xc000746370) (0xc0007c2640) Create stream\nI0213 11:14:33.660312 606 log.go:172] (0xc000746370) (0xc0007c2640) Stream added, broadcasting: 1\nI0213 11:14:33.666666 606 log.go:172] (0xc000746370) Reply frame received for 1\nI0213 11:14:33.666707 606 log.go:172] (0xc000746370) (0xc0005cab40) Create stream\nI0213 11:14:33.666715 606 log.go:172] (0xc000746370) (0xc0005cab40) Stream added, broadcasting: 3\nI0213 11:14:33.668018 606 log.go:172] (0xc000746370) Reply frame received for 3\nI0213 11:14:33.668059 606 log.go:172] (0xc000746370) (0xc0005cac80) Create stream\nI0213 11:14:33.668065 606 log.go:172] (0xc000746370) (0xc0005cac80) Stream added, broadcasting: 5\nI0213 11:14:33.673723 606 log.go:172] (0xc000746370) Reply frame received for 5\nI0213 11:14:33.855282 606 log.go:172] (0xc000746370) Data frame received for 3\nI0213 11:14:33.855446 606 log.go:172] (0xc0005cab40) (3) Data frame handling\nI0213 11:14:33.855500 606 log.go:172] (0xc0005cab40) (3) Data frame sent\nI0213 11:14:34.158307 606 log.go:172] (0xc000746370) (0xc0005cab40) Stream removed, broadcasting: 3\nI0213 11:14:34.158612 606 log.go:172] (0xc000746370) Data frame received for 1\nI0213 11:14:34.158656 606 log.go:172] (0xc0007c2640) (1) Data frame handling\nI0213 11:14:34.158793 
606 log.go:172] (0xc0007c2640) (1) Data frame sent\nI0213 11:14:34.158976 606 log.go:172] (0xc000746370) (0xc0007c2640) Stream removed, broadcasting: 1\nI0213 11:14:34.159076 606 log.go:172] (0xc000746370) (0xc0005cac80) Stream removed, broadcasting: 5\nI0213 11:14:34.159201 606 log.go:172] (0xc000746370) Go away received\nI0213 11:14:34.159694 606 log.go:172] (0xc000746370) (0xc0007c2640) Stream removed, broadcasting: 1\nI0213 11:14:34.159709 606 log.go:172] (0xc000746370) (0xc0005cab40) Stream removed, broadcasting: 3\nI0213 11:14:34.159713 606 log.go:172] (0xc000746370) (0xc0005cac80) Stream removed, broadcasting: 5\n" Feb 13 11:14:34.171: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 13 11:14:34.171: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 13 11:14:34.209: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 13 11:14:44.264: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 13 11:14:44.264: INFO: Waiting for statefulset status.replicas updated to 0 Feb 13 11:14:44.297: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999655s Feb 13 11:14:45.309: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.989642088s Feb 13 11:14:46.373: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.977696728s Feb 13 11:14:47.395: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.913096824s Feb 13 11:14:48.411: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.891651523s Feb 13 11:14:49.481: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.875790633s Feb 13 11:14:50.503: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.805240877s Feb 13 11:14:51.523: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.783301775s Feb 13 11:14:52.563: INFO: 
Verifying statefulset ss doesn't scale past 1 for another 1.763326759s Feb 13 11:14:53.579: INFO: Verifying statefulset ss doesn't scale past 1 for another 723.91178ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-9m46g Feb 13 11:14:54.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9m46g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 11:14:55.140: INFO: stderr: "I0213 11:14:54.816609 629 log.go:172] (0xc00072c370) (0xc000788640) Create stream\nI0213 11:14:54.816921 629 log.go:172] (0xc00072c370) (0xc000788640) Stream added, broadcasting: 1\nI0213 11:14:54.823586 629 log.go:172] (0xc00072c370) Reply frame received for 1\nI0213 11:14:54.823618 629 log.go:172] (0xc00072c370) (0xc000662c80) Create stream\nI0213 11:14:54.823630 629 log.go:172] (0xc00072c370) (0xc000662c80) Stream added, broadcasting: 3\nI0213 11:14:54.824529 629 log.go:172] (0xc00072c370) Reply frame received for 3\nI0213 11:14:54.824555 629 log.go:172] (0xc00072c370) (0xc000780000) Create stream\nI0213 11:14:54.824564 629 log.go:172] (0xc00072c370) (0xc000780000) Stream added, broadcasting: 5\nI0213 11:14:54.825161 629 log.go:172] (0xc00072c370) Reply frame received for 5\nI0213 11:14:54.945810 629 log.go:172] (0xc00072c370) Data frame received for 3\nI0213 11:14:54.945991 629 log.go:172] (0xc000662c80) (3) Data frame handling\nI0213 11:14:54.946031 629 log.go:172] (0xc000662c80) (3) Data frame sent\nI0213 11:14:55.128489 629 log.go:172] (0xc00072c370) (0xc000662c80) Stream removed, broadcasting: 3\nI0213 11:14:55.128768 629 log.go:172] (0xc00072c370) Data frame received for 1\nI0213 11:14:55.128838 629 log.go:172] (0xc000788640) (1) Data frame handling\nI0213 11:14:55.129089 629 log.go:172] (0xc000788640) (1) Data frame sent\nI0213 11:14:55.129234 629 log.go:172] (0xc00072c370) (0xc000780000) Stream removed, broadcasting: 
5\nI0213 11:14:55.129322 629 log.go:172] (0xc00072c370) (0xc000788640) Stream removed, broadcasting: 1\nI0213 11:14:55.129350 629 log.go:172] (0xc00072c370) Go away received\nI0213 11:14:55.130328 629 log.go:172] (0xc00072c370) (0xc000788640) Stream removed, broadcasting: 1\nI0213 11:14:55.130378 629 log.go:172] (0xc00072c370) (0xc000662c80) Stream removed, broadcasting: 3\nI0213 11:14:55.130410 629 log.go:172] (0xc00072c370) (0xc000780000) Stream removed, broadcasting: 5\n" Feb 13 11:14:55.140: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 13 11:14:55.140: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 13 11:14:55.151: INFO: Found 1 stateful pods, waiting for 3 Feb 13 11:15:05.175: INFO: Found 2 stateful pods, waiting for 3 Feb 13 11:15:15.169: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 13 11:15:15.169: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 13 11:15:15.169: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 13 11:15:25.168: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 13 11:15:25.168: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 13 11:15:25.168: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 13 11:15:25.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9m46g ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 13 11:15:25.765: INFO: stderr: "I0213 11:15:25.470715 651 log.go:172] (0xc00015c790) (0xc00068d220) Create stream\nI0213 11:15:25.471402 651 log.go:172] 
(0xc00015c790) (0xc00068d220) Stream added, broadcasting: 1\nI0213 11:15:25.480670 651 log.go:172] (0xc00015c790) Reply frame received for 1\nI0213 11:15:25.480860 651 log.go:172] (0xc00015c790) (0xc000768000) Create stream\nI0213 11:15:25.480885 651 log.go:172] (0xc00015c790) (0xc000768000) Stream added, broadcasting: 3\nI0213 11:15:25.482668 651 log.go:172] (0xc00015c790) Reply frame received for 3\nI0213 11:15:25.482710 651 log.go:172] (0xc00015c790) (0xc00068d2c0) Create stream\nI0213 11:15:25.482729 651 log.go:172] (0xc00015c790) (0xc00068d2c0) Stream added, broadcasting: 5\nI0213 11:15:25.485097 651 log.go:172] (0xc00015c790) Reply frame received for 5\nI0213 11:15:25.602367 651 log.go:172] (0xc00015c790) Data frame received for 3\nI0213 11:15:25.602536 651 log.go:172] (0xc000768000) (3) Data frame handling\nI0213 11:15:25.602600 651 log.go:172] (0xc000768000) (3) Data frame sent\nI0213 11:15:25.747309 651 log.go:172] (0xc00015c790) (0xc000768000) Stream removed, broadcasting: 3\nI0213 11:15:25.747570 651 log.go:172] (0xc00015c790) Data frame received for 1\nI0213 11:15:25.747617 651 log.go:172] (0xc00068d220) (1) Data frame handling\nI0213 11:15:25.747636 651 log.go:172] (0xc00068d220) (1) Data frame sent\nI0213 11:15:25.747652 651 log.go:172] (0xc00015c790) (0xc00068d220) Stream removed, broadcasting: 1\nI0213 11:15:25.747797 651 log.go:172] (0xc00015c790) (0xc00068d2c0) Stream removed, broadcasting: 5\nI0213 11:15:25.747971 651 log.go:172] (0xc00015c790) Go away received\nI0213 11:15:25.748737 651 log.go:172] (0xc00015c790) (0xc00068d220) Stream removed, broadcasting: 1\nI0213 11:15:25.748775 651 log.go:172] (0xc00015c790) (0xc000768000) Stream removed, broadcasting: 3\nI0213 11:15:25.748794 651 log.go:172] (0xc00015c790) (0xc00068d2c0) Stream removed, broadcasting: 5\n" Feb 13 11:15:25.765: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 13 11:15:25.765: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on 
ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 13 11:15:25.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9m46g ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 13 11:15:26.906: INFO: stderr: "I0213 11:15:26.180414 674 log.go:172] (0xc0005ec0b0) (0xc000888640) Create stream\nI0213 11:15:26.180979 674 log.go:172] (0xc0005ec0b0) (0xc000888640) Stream added, broadcasting: 1\nI0213 11:15:26.260655 674 log.go:172] (0xc0005ec0b0) Reply frame received for 1\nI0213 11:15:26.260968 674 log.go:172] (0xc0005ec0b0) (0xc0006ee000) Create stream\nI0213 11:15:26.261000 674 log.go:172] (0xc0005ec0b0) (0xc0006ee000) Stream added, broadcasting: 3\nI0213 11:15:26.263617 674 log.go:172] (0xc0005ec0b0) Reply frame received for 3\nI0213 11:15:26.263680 674 log.go:172] (0xc0005ec0b0) (0xc0002ccaa0) Create stream\nI0213 11:15:26.263699 674 log.go:172] (0xc0005ec0b0) (0xc0002ccaa0) Stream added, broadcasting: 5\nI0213 11:15:26.269395 674 log.go:172] (0xc0005ec0b0) Reply frame received for 5\nI0213 11:15:26.698992 674 log.go:172] (0xc0005ec0b0) Data frame received for 3\nI0213 11:15:26.699173 674 log.go:172] (0xc0006ee000) (3) Data frame handling\nI0213 11:15:26.699221 674 log.go:172] (0xc0006ee000) (3) Data frame sent\nI0213 11:15:26.891605 674 log.go:172] (0xc0005ec0b0) Data frame received for 1\nI0213 11:15:26.891786 674 log.go:172] (0xc0005ec0b0) (0xc0006ee000) Stream removed, broadcasting: 3\nI0213 11:15:26.891842 674 log.go:172] (0xc000888640) (1) Data frame handling\nI0213 11:15:26.891871 674 log.go:172] (0xc000888640) (1) Data frame sent\nI0213 11:15:26.891996 674 log.go:172] (0xc0005ec0b0) (0xc0002ccaa0) Stream removed, broadcasting: 5\nI0213 11:15:26.892081 674 log.go:172] (0xc0005ec0b0) (0xc000888640) Stream removed, broadcasting: 1\nI0213 11:15:26.892120 674 log.go:172] (0xc0005ec0b0) Go away received\nI0213 11:15:26.892739 674 log.go:172] (0xc0005ec0b0) 
(0xc000888640) Stream removed, broadcasting: 1\nI0213 11:15:26.892757 674 log.go:172] (0xc0005ec0b0) (0xc0006ee000) Stream removed, broadcasting: 3\nI0213 11:15:26.892771 674 log.go:172] (0xc0005ec0b0) (0xc0002ccaa0) Stream removed, broadcasting: 5\n" Feb 13 11:15:26.906: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 13 11:15:26.906: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 13 11:15:26.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9m46g ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 13 11:15:27.591: INFO: stderr: "I0213 11:15:27.160222 696 log.go:172] (0xc000138160) (0xc00067f2c0) Create stream\nI0213 11:15:27.160549 696 log.go:172] (0xc000138160) (0xc00067f2c0) Stream added, broadcasting: 1\nI0213 11:15:27.165460 696 log.go:172] (0xc000138160) Reply frame received for 1\nI0213 11:15:27.165548 696 log.go:172] (0xc000138160) (0xc0005ac000) Create stream\nI0213 11:15:27.165564 696 log.go:172] (0xc000138160) (0xc0005ac000) Stream added, broadcasting: 3\nI0213 11:15:27.166249 696 log.go:172] (0xc000138160) Reply frame received for 3\nI0213 11:15:27.166267 696 log.go:172] (0xc000138160) (0xc000736000) Create stream\nI0213 11:15:27.166274 696 log.go:172] (0xc000138160) (0xc000736000) Stream added, broadcasting: 5\nI0213 11:15:27.166940 696 log.go:172] (0xc000138160) Reply frame received for 5\nI0213 11:15:27.349059 696 log.go:172] (0xc000138160) Data frame received for 3\nI0213 11:15:27.349268 696 log.go:172] (0xc0005ac000) (3) Data frame handling\nI0213 11:15:27.349316 696 log.go:172] (0xc0005ac000) (3) Data frame sent\nI0213 11:15:27.580804 696 log.go:172] (0xc000138160) Data frame received for 1\nI0213 11:15:27.580912 696 log.go:172] (0xc000138160) (0xc000736000) Stream removed, broadcasting: 5\nI0213 11:15:27.580997 696 log.go:172] 
(0xc00067f2c0) (1) Data frame handling\nI0213 11:15:27.581017 696 log.go:172] (0xc00067f2c0) (1) Data frame sent\nI0213 11:15:27.581029 696 log.go:172] (0xc000138160) (0xc0005ac000) Stream removed, broadcasting: 3\nI0213 11:15:27.581051 696 log.go:172] (0xc000138160) (0xc00067f2c0) Stream removed, broadcasting: 1\nI0213 11:15:27.581062 696 log.go:172] (0xc000138160) Go away received\nI0213 11:15:27.581819 696 log.go:172] (0xc000138160) (0xc00067f2c0) Stream removed, broadcasting: 1\nI0213 11:15:27.581836 696 log.go:172] (0xc000138160) (0xc0005ac000) Stream removed, broadcasting: 3\nI0213 11:15:27.581845 696 log.go:172] (0xc000138160) (0xc000736000) Stream removed, broadcasting: 5\n" Feb 13 11:15:27.591: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 13 11:15:27.592: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 13 11:15:27.592: INFO: Waiting for statefulset status.replicas updated to 0 Feb 13 11:15:27.643: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Feb 13 11:15:37.721: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 13 11:15:37.721: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 13 11:15:37.721: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 13 11:15:37.768: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999998455s Feb 13 11:15:38.810: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.983385947s Feb 13 11:15:39.834: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.941719865s Feb 13 11:15:40.884: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.917567067s Feb 13 11:15:41.913: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.86778824s Feb 13 11:15:42.930: INFO: Verifying statefulset ss 
doesn't scale past 3 for another 4.83931218s Feb 13 11:15:43.957: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.821716355s Feb 13 11:15:44.978: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.795317761s Feb 13 11:15:46.050: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.773868264s Feb 13 11:15:47.062: INFO: Verifying statefulset ss doesn't scale past 3 for another 702.135601ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-9m46g Feb 13 11:15:48.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9m46g ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 11:15:48.758: INFO: stderr: "I0213 11:15:48.316091 718 log.go:172] (0xc000138840) (0xc000744640) Create stream\nI0213 11:15:48.316440 718 log.go:172] (0xc000138840) (0xc000744640) Stream added, broadcasting: 1\nI0213 11:15:48.326488 718 log.go:172] (0xc000138840) Reply frame received for 1\nI0213 11:15:48.326537 718 log.go:172] (0xc000138840) (0xc0007446e0) Create stream\nI0213 11:15:48.326570 718 log.go:172] (0xc000138840) (0xc0007446e0) Stream added, broadcasting: 3\nI0213 11:15:48.328947 718 log.go:172] (0xc000138840) Reply frame received for 3\nI0213 11:15:48.328980 718 log.go:172] (0xc000138840) (0xc00068ee60) Create stream\nI0213 11:15:48.328992 718 log.go:172] (0xc000138840) (0xc00068ee60) Stream added, broadcasting: 5\nI0213 11:15:48.330378 718 log.go:172] (0xc000138840) Reply frame received for 5\nI0213 11:15:48.591733 718 log.go:172] (0xc000138840) Data frame received for 3\nI0213 11:15:48.591999 718 log.go:172] (0xc0007446e0) (3) Data frame handling\nI0213 11:15:48.592050 718 log.go:172] (0xc0007446e0) (3) Data frame sent\nI0213 11:15:48.739509 718 log.go:172] (0xc000138840) Data frame received for 1\nI0213 11:15:48.739796 718 log.go:172] (0xc000744640) (1) Data frame handling\nI0213 
11:15:48.739896 718 log.go:172] (0xc000744640) (1) Data frame sent\nI0213 11:15:48.741403 718 log.go:172] (0xc000138840) (0xc0007446e0) Stream removed, broadcasting: 3\nI0213 11:15:48.741545 718 log.go:172] (0xc000138840) (0xc000744640) Stream removed, broadcasting: 1\nI0213 11:15:48.742348 718 log.go:172] (0xc000138840) (0xc00068ee60) Stream removed, broadcasting: 5\nI0213 11:15:48.742414 718 log.go:172] (0xc000138840) (0xc000744640) Stream removed, broadcasting: 1\nI0213 11:15:48.742436 718 log.go:172] (0xc000138840) (0xc0007446e0) Stream removed, broadcasting: 3\nI0213 11:15:48.742453 718 log.go:172] (0xc000138840) (0xc00068ee60) Stream removed, broadcasting: 5\n" Feb 13 11:15:48.758: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 13 11:15:48.758: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 13 11:15:48.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9m46g ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 11:15:49.501: INFO: stderr: "I0213 11:15:48.981848 740 log.go:172] (0xc000138790) (0xc0006732c0) Create stream\nI0213 11:15:48.982208 740 log.go:172] (0xc000138790) (0xc0006732c0) Stream added, broadcasting: 1\nI0213 11:15:48.988452 740 log.go:172] (0xc000138790) Reply frame received for 1\nI0213 11:15:48.988488 740 log.go:172] (0xc000138790) (0xc000360000) Create stream\nI0213 11:15:48.988496 740 log.go:172] (0xc000138790) (0xc000360000) Stream added, broadcasting: 3\nI0213 11:15:48.989281 740 log.go:172] (0xc000138790) Reply frame received for 3\nI0213 11:15:48.989304 740 log.go:172] (0xc000138790) (0xc000673360) Create stream\nI0213 11:15:48.989309 740 log.go:172] (0xc000138790) (0xc000673360) Stream added, broadcasting: 5\nI0213 11:15:48.989923 740 log.go:172] (0xc000138790) Reply frame received for 5\nI0213 11:15:49.208823 740 log.go:172] 
(0xc000138790) Data frame received for 3\nI0213 11:15:49.209013 740 log.go:172] (0xc000360000) (3) Data frame handling\nI0213 11:15:49.209058 740 log.go:172] (0xc000360000) (3) Data frame sent\nI0213 11:15:49.493896 740 log.go:172] (0xc000138790) Data frame received for 1\nI0213 11:15:49.494003 740 log.go:172] (0xc000138790) (0xc000360000) Stream removed, broadcasting: 3\nI0213 11:15:49.494151 740 log.go:172] (0xc0006732c0) (1) Data frame handling\nI0213 11:15:49.494187 740 log.go:172] (0xc0006732c0) (1) Data frame sent\nI0213 11:15:49.494341 740 log.go:172] (0xc000138790) (0xc000673360) Stream removed, broadcasting: 5\nI0213 11:15:49.494603 740 log.go:172] (0xc000138790) (0xc0006732c0) Stream removed, broadcasting: 1\nI0213 11:15:49.494634 740 log.go:172] (0xc000138790) Go away received\nI0213 11:15:49.495174 740 log.go:172] (0xc000138790) (0xc0006732c0) Stream removed, broadcasting: 1\nI0213 11:15:49.495192 740 log.go:172] (0xc000138790) (0xc000360000) Stream removed, broadcasting: 3\nI0213 11:15:49.495203 740 log.go:172] (0xc000138790) (0xc000673360) Stream removed, broadcasting: 5\n" Feb 13 11:15:49.502: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 13 11:15:49.502: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 13 11:15:49.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9m46g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 11:15:49.957: INFO: rc: 126 Feb 13 11:15:49.957: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9m46g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown I0213 11:15:49.710641 762 log.go:172] 
(0xc0001a44d0) (0xc00064d540) Create stream I0213 11:15:49.710802 762 log.go:172] (0xc0001a44d0) (0xc00064d540) Stream added, broadcasting: 1 I0213 11:15:49.714418 762 log.go:172] (0xc0001a44d0) Reply frame received for 1 I0213 11:15:49.714472 762 log.go:172] (0xc0001a44d0) (0xc00064d5e0) Create stream I0213 11:15:49.714488 762 log.go:172] (0xc0001a44d0) (0xc00064d5e0) Stream added, broadcasting: 3 I0213 11:15:49.715415 762 log.go:172] (0xc0001a44d0) Reply frame received for 3 I0213 11:15:49.715439 762 log.go:172] (0xc0001a44d0) (0xc000640000) Create stream I0213 11:15:49.715446 762 log.go:172] (0xc0001a44d0) (0xc000640000) Stream added, broadcasting: 5 I0213 11:15:49.716220 762 log.go:172] (0xc0001a44d0) Reply frame received for 5 I0213 11:15:49.949843 762 log.go:172] (0xc0001a44d0) Data frame received for 3 I0213 11:15:49.949905 762 log.go:172] (0xc00064d5e0) (3) Data frame handling I0213 11:15:49.949925 762 log.go:172] (0xc00064d5e0) (3) Data frame sent I0213 11:15:49.951609 762 log.go:172] (0xc0001a44d0) (0xc00064d5e0) Stream removed, broadcasting: 3 I0213 11:15:49.951772 762 log.go:172] (0xc0001a44d0) Data frame received for 1 I0213 11:15:49.951797 762 log.go:172] (0xc00064d540) (1) Data frame handling I0213 11:15:49.951819 762 log.go:172] (0xc00064d540) (1) Data frame sent I0213 11:15:49.951843 762 log.go:172] (0xc0001a44d0) (0xc00064d540) Stream removed, broadcasting: 1 I0213 11:15:49.952333 762 log.go:172] (0xc0001a44d0) (0xc000640000) Stream removed, broadcasting: 5 I0213 11:15:49.952375 762 log.go:172] (0xc0001a44d0) (0xc00064d540) Stream removed, broadcasting: 1 I0213 11:15:49.952398 762 log.go:172] (0xc0001a44d0) (0xc00064d5e0) Stream removed, broadcasting: 3 I0213 11:15:49.952418 762 log.go:172] (0xc0001a44d0) (0xc000640000) Stream removed, broadcasting: 5 I0213 11:15:49.952751 762 log.go:172] (0xc0001a44d0) Go away received command terminated with exit code 126 [] 0xc0016d3bc0 exit status 126 true [0xc0000e8b18 0xc0000e8bd0 0xc0000e8c40] [0xc0000e8b18 
0xc0000e8bd0 0xc0000e8c40] [0xc0000e8bc8 0xc0000e8c38] [0x935700 0x935700] 0xc00160cd80 }: Command stdout: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown stderr: I0213 11:15:49.710641 762 log.go:172] (0xc0001a44d0) (0xc00064d540) Create stream I0213 11:15:49.710802 762 log.go:172] (0xc0001a44d0) (0xc00064d540) Stream added, broadcasting: 1 I0213 11:15:49.714418 762 log.go:172] (0xc0001a44d0) Reply frame received for 1 I0213 11:15:49.714472 762 log.go:172] (0xc0001a44d0) (0xc00064d5e0) Create stream I0213 11:15:49.714488 762 log.go:172] (0xc0001a44d0) (0xc00064d5e0) Stream added, broadcasting: 3 I0213 11:15:49.715415 762 log.go:172] (0xc0001a44d0) Reply frame received for 3 I0213 11:15:49.715439 762 log.go:172] (0xc0001a44d0) (0xc000640000) Create stream I0213 11:15:49.715446 762 log.go:172] (0xc0001a44d0) (0xc000640000) Stream added, broadcasting: 5 I0213 11:15:49.716220 762 log.go:172] (0xc0001a44d0) Reply frame received for 5 I0213 11:15:49.949843 762 log.go:172] (0xc0001a44d0) Data frame received for 3 I0213 11:15:49.949905 762 log.go:172] (0xc00064d5e0) (3) Data frame handling I0213 11:15:49.949925 762 log.go:172] (0xc00064d5e0) (3) Data frame sent I0213 11:15:49.951609 762 log.go:172] (0xc0001a44d0) (0xc00064d5e0) Stream removed, broadcasting: 3 I0213 11:15:49.951772 762 log.go:172] (0xc0001a44d0) Data frame received for 1 I0213 11:15:49.951797 762 log.go:172] (0xc00064d540) (1) Data frame handling I0213 11:15:49.951819 762 log.go:172] (0xc00064d540) (1) Data frame sent I0213 11:15:49.951843 762 log.go:172] (0xc0001a44d0) (0xc00064d540) Stream removed, broadcasting: 1 I0213 11:15:49.952333 762 log.go:172] (0xc0001a44d0) (0xc000640000) Stream removed, broadcasting: 5 I0213 11:15:49.952375 762 log.go:172] (0xc0001a44d0) (0xc00064d540) Stream removed, broadcasting: 1 I0213 11:15:49.952398 762 log.go:172] (0xc0001a44d0) (0xc00064d5e0) Stream removed, broadcasting: 3 I0213 11:15:49.952418 762 log.go:172] (0xc0001a44d0) 
(0xc000640000) Stream removed, broadcasting: 5
I0213 11:15:49.952751     762 log.go:172] (0xc0001a44d0) Go away received
command terminated with exit code 126
error: exit status 126
Feb 13 11:15:59.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9m46g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 11:16:00.245: INFO: rc: 1
Feb 13 11:16:00.245: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9m46g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001cde390 exit status 1 true [0xc0010cc118 0xc0010cc130 0xc0010cc148] [0xc0010cc118 0xc0010cc130 0xc0010cc148] [0xc0010cc128 0xc0010cc140] [0x935700 0x935700] 0xc0013e5740 }:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("nginx")
error: exit status 1
Feb 13 11:16:10.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9m46g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 11:16:10.677: INFO: rc: 1
Feb 13 11:16:10.678: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9m46g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0016d3d10 exit status 1 true [0xc0000e8c58 0xc0000e8c90 0xc0000e8cf0] [0xc0000e8c58 0xc0000e8c90 0xc0000e8cf0] [0xc0000e8c88 0xc0000e8cd0] [0x935700 0x935700] 0xc000d7e1e0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Feb 13 11:20:44.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9m46g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 11:20:45.043: INFO: rc: 1
Feb 13 11:20:45.044: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9m46g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000e68270 exit status 1 true [0xc0019f4008 0xc0019f4020 0xc0019f4038] [0xc0019f4008 0xc0019f4020 0xc0019f4038] [0xc0019f4018 0xc0019f4030] [0x935700 0x935700] 0xc00148fe00 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
Feb 13 11:20:55.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9m46g ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 11:20:55.244: INFO: rc: 1
Feb 13 11:20:55.244: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2:
Feb 13 11:20:55.244: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 13 11:20:55.267: INFO: Deleting all statefulset in ns e2e-tests-statefulset-9m46g
Feb 13 11:20:55.272: INFO: Scaling statefulset ss to 0
Feb 13 11:20:55.286: INFO: Waiting for statefulset status.replicas updated to 0
Feb 13 11:20:55.289: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:20:55.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-9m46g" for this suite.
Feb 13 11:21:03.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:21:03.418: INFO: namespace: e2e-tests-statefulset-9m46g, resource: bindings, ignored listing per whitelist
Feb 13 11:21:03.462: INFO: namespace e2e-tests-statefulset-9m46g deletion completed in 8.140422585s

• [SLOW TEST:400.481 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:21:03.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-ec4a99ba-4e52-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 13 11:21:04.935: INFO: Waiting up to 5m0s for pod "pod-secrets-ec694a8f-4e52-11ea-aba9-0242ac110007" in namespace "e2e-tests-secrets-jvknm" to be "success or failure"
Feb 13 11:21:05.019: INFO: Pod "pod-secrets-ec694a8f-4e52-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 84.482546ms
Feb 13 11:21:07.072: INFO: Pod "pod-secrets-ec694a8f-4e52-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136923307s
Feb 13 11:21:09.112: INFO: Pod "pod-secrets-ec694a8f-4e52-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177444547s
Feb 13 11:21:11.212: INFO: Pod "pod-secrets-ec694a8f-4e52-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.277138602s
Feb 13 11:21:13.231: INFO: Pod "pod-secrets-ec694a8f-4e52-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.296572937s
Feb 13 11:21:15.244: INFO: Pod "pod-secrets-ec694a8f-4e52-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.309095987s
STEP: Saw pod success
Feb 13 11:21:15.244: INFO: Pod "pod-secrets-ec694a8f-4e52-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 11:21:15.249: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-ec694a8f-4e52-11ea-aba9-0242ac110007 container secret-volume-test:
STEP: delete the pod
Feb 13 11:21:16.085: INFO: Waiting for pod pod-secrets-ec694a8f-4e52-11ea-aba9-0242ac110007 to disappear
Feb 13 11:21:16.374: INFO: Pod pod-secrets-ec694a8f-4e52-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:21:16.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-jvknm" for this suite.
Feb 13 11:21:22.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:21:22.587: INFO: namespace: e2e-tests-secrets-jvknm, resource: bindings, ignored listing per whitelist
Feb 13 11:21:22.897: INFO: namespace e2e-tests-secrets-jvknm deletion completed in 6.515061052s
STEP: Destroying namespace "e2e-tests-secret-namespace-v7f7m" for this suite.
Feb 13 11:21:28.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:21:29.159: INFO: namespace: e2e-tests-secret-namespace-v7f7m, resource: bindings, ignored listing per whitelist
Feb 13 11:21:29.174: INFO: namespace e2e-tests-secret-namespace-v7f7m deletion completed in 6.276772539s

• [SLOW TEST:25.712 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:21:29.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 13 11:21:29.466: INFO: Waiting up to 5m0s for pod "pod-fb0e394d-4e52-11ea-aba9-0242ac110007" in namespace "e2e-tests-emptydir-5ppbt" to be "success or failure"
Feb 13 11:21:29.499: INFO: Pod "pod-fb0e394d-4e52-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 32.45916ms
Feb 13 11:21:31.519: INFO: Pod "pod-fb0e394d-4e52-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052751613s
Feb 13 11:21:33.531: INFO: Pod "pod-fb0e394d-4e52-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064370799s
Feb 13 11:21:35.856: INFO: Pod "pod-fb0e394d-4e52-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.389445315s
Feb 13 11:21:37.879: INFO: Pod "pod-fb0e394d-4e52-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.412738796s
Feb 13 11:21:39.922: INFO: Pod "pod-fb0e394d-4e52-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.455507274s
STEP: Saw pod success
Feb 13 11:21:39.922: INFO: Pod "pod-fb0e394d-4e52-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 11:21:39.931: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-fb0e394d-4e52-11ea-aba9-0242ac110007 container test-container:
STEP: delete the pod
Feb 13 11:21:40.541: INFO: Waiting for pod pod-fb0e394d-4e52-11ea-aba9-0242ac110007 to disappear
Feb 13 11:21:40.620: INFO: Pod pod-fb0e394d-4e52-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:21:40.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5ppbt" for this suite.
Feb 13 11:21:46.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:21:46.906: INFO: namespace: e2e-tests-emptydir-5ppbt, resource: bindings, ignored listing per whitelist
Feb 13 11:21:46.932: INFO: namespace e2e-tests-emptydir-5ppbt deletion completed in 6.282319959s

• [SLOW TEST:17.758 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:21:46.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-gtqtt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gtqtt to expose endpoints map[]
Feb 13 11:21:47.403: INFO: Get endpoints failed (15.132288ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 13 11:21:48.417: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gtqtt exposes endpoints map[] (1.029255549s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-gtqtt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gtqtt to expose endpoints map[pod1:[100]]
Feb 13 11:21:52.899: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.446860241s elapsed, will retry)
Feb 13 11:21:58.972: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gtqtt exposes endpoints map[pod1:[100]] (10.519287112s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-gtqtt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gtqtt to expose endpoints map[pod1:[100] pod2:[101]]
Feb 13 11:22:03.510: INFO: Unexpected endpoints: found map[065e980b-4e53-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.514747966s elapsed, will retry)
Feb 13 11:22:09.103: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gtqtt exposes endpoints map[pod1:[100] pod2:[101]] (10.107485597s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-gtqtt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gtqtt to expose endpoints map[pod2:[101]]
Feb 13 11:22:10.267: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gtqtt exposes endpoints map[pod2:[101]] (1.157215786s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-gtqtt
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gtqtt to expose endpoints map[]
Feb 13 11:22:10.694: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gtqtt exposes endpoints map[] (141.31438ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:22:10.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-gtqtt" for this suite.
Feb 13 11:22:34.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:22:35.124: INFO: namespace: e2e-tests-services-gtqtt, resource: bindings, ignored listing per whitelist
Feb 13 11:22:35.162: INFO: namespace e2e-tests-services-gtqtt deletion completed in 24.214703237s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:48.230 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Services
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:22:35.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:22:35.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-j72vj" for this suite.
Feb 13 11:22:41.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:22:41.590: INFO: namespace: e2e-tests-services-j72vj, resource: bindings, ignored listing per whitelist
Feb 13 11:22:41.747: INFO: namespace e2e-tests-services-j72vj deletion completed in 6.286774902s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.585 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:22:41.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 13 11:22:41.925: INFO: Waiting up to 5m0s for pod "downwardapi-volume-263f8c64-4e53-11ea-aba9-0242ac110007" in namespace "e2e-tests-downward-api-4l5gm" to be 
"success or failure" Feb 13 11:22:41.941: INFO: Pod "downwardapi-volume-263f8c64-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.524386ms Feb 13 11:22:44.083: INFO: Pod "downwardapi-volume-263f8c64-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157929439s Feb 13 11:22:46.099: INFO: Pod "downwardapi-volume-263f8c64-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173926736s Feb 13 11:22:48.353: INFO: Pod "downwardapi-volume-263f8c64-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.42797022s Feb 13 11:22:50.665: INFO: Pod "downwardapi-volume-263f8c64-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.739460181s Feb 13 11:22:52.696: INFO: Pod "downwardapi-volume-263f8c64-4e53-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.770755104s STEP: Saw pod success Feb 13 11:22:52.696: INFO: Pod "downwardapi-volume-263f8c64-4e53-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 11:22:52.705: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-263f8c64-4e53-11ea-aba9-0242ac110007 container client-container: STEP: delete the pod Feb 13 11:22:54.873: INFO: Waiting for pod downwardapi-volume-263f8c64-4e53-11ea-aba9-0242ac110007 to disappear Feb 13 11:22:54.880: INFO: Pod downwardapi-volume-263f8c64-4e53-11ea-aba9-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:22:54.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4l5gm" for this suite. 
Feb 13 11:23:02.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:23:03.032: INFO: namespace: e2e-tests-downward-api-4l5gm, resource: bindings, ignored listing per whitelist Feb 13 11:23:03.187: INFO: namespace e2e-tests-downward-api-4l5gm deletion completed in 8.288471457s • [SLOW TEST:21.439 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:23:03.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-sh2p STEP: Creating a pod to test atomic-volume-subpath Feb 13 11:23:03.511: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-sh2p" in namespace "e2e-tests-subpath-qx66f" to be "success or failure" Feb 13 11:23:03.525: INFO: Pod "pod-subpath-test-configmap-sh2p": 
Phase="Pending", Reason="", readiness=false. Elapsed: 13.686147ms Feb 13 11:23:05.770: INFO: Pod "pod-subpath-test-configmap-sh2p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.258593922s Feb 13 11:23:07.785: INFO: Pod "pod-subpath-test-configmap-sh2p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274299173s Feb 13 11:23:10.640: INFO: Pod "pod-subpath-test-configmap-sh2p": Phase="Pending", Reason="", readiness=false. Elapsed: 7.129257291s Feb 13 11:23:12.662: INFO: Pod "pod-subpath-test-configmap-sh2p": Phase="Pending", Reason="", readiness=false. Elapsed: 9.150443529s Feb 13 11:23:14.672: INFO: Pod "pod-subpath-test-configmap-sh2p": Phase="Pending", Reason="", readiness=false. Elapsed: 11.160945971s Feb 13 11:23:16.782: INFO: Pod "pod-subpath-test-configmap-sh2p": Phase="Pending", Reason="", readiness=false. Elapsed: 13.27098421s Feb 13 11:23:18.819: INFO: Pod "pod-subpath-test-configmap-sh2p": Phase="Pending", Reason="", readiness=false. Elapsed: 15.308086183s Feb 13 11:23:21.509: INFO: Pod "pod-subpath-test-configmap-sh2p": Phase="Pending", Reason="", readiness=false. Elapsed: 17.998090056s Feb 13 11:23:23.527: INFO: Pod "pod-subpath-test-configmap-sh2p": Phase="Running", Reason="", readiness=false. Elapsed: 20.01551949s Feb 13 11:23:25.545: INFO: Pod "pod-subpath-test-configmap-sh2p": Phase="Running", Reason="", readiness=false. Elapsed: 22.033869803s Feb 13 11:23:27.561: INFO: Pod "pod-subpath-test-configmap-sh2p": Phase="Running", Reason="", readiness=false. Elapsed: 24.050009676s Feb 13 11:23:29.575: INFO: Pod "pod-subpath-test-configmap-sh2p": Phase="Running", Reason="", readiness=false. Elapsed: 26.063762949s Feb 13 11:23:31.589: INFO: Pod "pod-subpath-test-configmap-sh2p": Phase="Running", Reason="", readiness=false. Elapsed: 28.077364459s Feb 13 11:23:33.610: INFO: Pod "pod-subpath-test-configmap-sh2p": Phase="Running", Reason="", readiness=false. 
Elapsed: 30.098363717s Feb 13 11:23:35.627: INFO: Pod "pod-subpath-test-configmap-sh2p": Phase="Running", Reason="", readiness=false. Elapsed: 32.115444882s Feb 13 11:23:37.672: INFO: Pod "pod-subpath-test-configmap-sh2p": Phase="Running", Reason="", readiness=false. Elapsed: 34.16081535s Feb 13 11:23:39.689: INFO: Pod "pod-subpath-test-configmap-sh2p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.17746504s STEP: Saw pod success Feb 13 11:23:39.689: INFO: Pod "pod-subpath-test-configmap-sh2p" satisfied condition "success or failure" Feb 13 11:23:39.732: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-sh2p container test-container-subpath-configmap-sh2p: STEP: delete the pod Feb 13 11:23:39.939: INFO: Waiting for pod pod-subpath-test-configmap-sh2p to disappear Feb 13 11:23:39.947: INFO: Pod pod-subpath-test-configmap-sh2p no longer exists STEP: Deleting pod pod-subpath-test-configmap-sh2p Feb 13 11:23:39.947: INFO: Deleting pod "pod-subpath-test-configmap-sh2p" in namespace "e2e-tests-subpath-qx66f" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:23:39.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-qx66f" for this suite. 
Feb 13 11:23:46.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:23:46.311: INFO: namespace: e2e-tests-subpath-qx66f, resource: bindings, ignored listing per whitelist Feb 13 11:23:46.328: INFO: namespace e2e-tests-subpath-qx66f deletion completed in 6.36890169s • [SLOW TEST:43.140 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:23:46.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 13 11:23:46.677: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-p4db6' Feb 13 11:23:50.926: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 13 11:23:50.927: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Feb 13 11:23:50.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-p4db6' Feb 13 11:23:51.427: INFO: stderr: "" Feb 13 11:23:51.427: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:23:51.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-p4db6" for this suite. 
Feb 13 11:24:15.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:24:15.721: INFO: namespace: e2e-tests-kubectl-p4db6, resource: bindings, ignored listing per whitelist Feb 13 11:24:15.721: INFO: namespace e2e-tests-kubectl-p4db6 deletion completed in 24.280817218s • [SLOW TEST:29.392 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:24:15.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 13 11:24:15.914: INFO: Creating ReplicaSet my-hostname-basic-5e4673f8-4e53-11ea-aba9-0242ac110007 Feb 13 11:24:15.952: INFO: Pod name my-hostname-basic-5e4673f8-4e53-11ea-aba9-0242ac110007: Found 0 pods out of 1 Feb 13 11:24:20.966: INFO: Pod name my-hostname-basic-5e4673f8-4e53-11ea-aba9-0242ac110007: Found 1 pods out of 1 Feb 13 11:24:20.966: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5e4673f8-4e53-11ea-aba9-0242ac110007" is running 
Feb 13 11:24:27.022: INFO: Pod "my-hostname-basic-5e4673f8-4e53-11ea-aba9-0242ac110007-mkv29" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 11:24:16 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 11:24:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5e4673f8-4e53-11ea-aba9-0242ac110007]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 11:24:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5e4673f8-4e53-11ea-aba9-0242ac110007]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 11:24:16 +0000 UTC Reason: Message:}]) Feb 13 11:24:27.023: INFO: Trying to dial the pod Feb 13 11:24:32.077: INFO: Controller my-hostname-basic-5e4673f8-4e53-11ea-aba9-0242ac110007: Got expected result from replica 1 [my-hostname-basic-5e4673f8-4e53-11ea-aba9-0242ac110007-mkv29]: "my-hostname-basic-5e4673f8-4e53-11ea-aba9-0242ac110007-mkv29", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:24:32.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-k5psf" for this suite. 
Feb 13 11:24:38.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:24:38.295: INFO: namespace: e2e-tests-replicaset-k5psf, resource: bindings, ignored listing per whitelist Feb 13 11:24:38.371: INFO: namespace e2e-tests-replicaset-k5psf deletion completed in 6.275047109s • [SLOW TEST:22.650 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:24:38.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 13 11:24:38.694: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6bd88a17-4e53-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-c7j2k" to be "success or failure" Feb 13 11:24:38.726: INFO: Pod 
"downwardapi-volume-6bd88a17-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 31.551985ms Feb 13 11:24:41.037: INFO: Pod "downwardapi-volume-6bd88a17-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.342730201s Feb 13 11:24:43.775: INFO: Pod "downwardapi-volume-6bd88a17-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 5.081072945s Feb 13 11:24:45.791: INFO: Pod "downwardapi-volume-6bd88a17-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.097069952s Feb 13 11:24:47.823: INFO: Pod "downwardapi-volume-6bd88a17-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.128726357s Feb 13 11:24:49.838: INFO: Pod "downwardapi-volume-6bd88a17-4e53-11ea-aba9-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 11.143806524s Feb 13 11:24:51.994: INFO: Pod "downwardapi-volume-6bd88a17-4e53-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.29970852s STEP: Saw pod success Feb 13 11:24:51.994: INFO: Pod "downwardapi-volume-6bd88a17-4e53-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 11:24:52.015: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6bd88a17-4e53-11ea-aba9-0242ac110007 container client-container: STEP: delete the pod Feb 13 11:24:52.768: INFO: Waiting for pod downwardapi-volume-6bd88a17-4e53-11ea-aba9-0242ac110007 to disappear Feb 13 11:24:53.037: INFO: Pod downwardapi-volume-6bd88a17-4e53-11ea-aba9-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:24:53.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-c7j2k" for this suite. 
Feb 13 11:24:59.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:24:59.159: INFO: namespace: e2e-tests-projected-c7j2k, resource: bindings, ignored listing per whitelist Feb 13 11:24:59.286: INFO: namespace e2e-tests-projected-c7j2k deletion completed in 6.22722945s • [SLOW TEST:20.915 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:24:59.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 13 11:24:59.656: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Feb 13 11:24:59.688: INFO: Number of nodes with available pods: 0 Feb 13 11:24:59.688: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Feb 13 11:24:59.839: INFO: Number of nodes with available pods: 0 Feb 13 11:24:59.839: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:00.858: INFO: Number of nodes with available pods: 0 Feb 13 11:25:00.858: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:02.280: INFO: Number of nodes with available pods: 0 Feb 13 11:25:02.280: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:02.852: INFO: Number of nodes with available pods: 0 Feb 13 11:25:02.852: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:03.859: INFO: Number of nodes with available pods: 0 Feb 13 11:25:03.859: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:05.934: INFO: Number of nodes with available pods: 0 Feb 13 11:25:05.934: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:06.973: INFO: Number of nodes with available pods: 0 Feb 13 11:25:06.973: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:07.884: INFO: Number of nodes with available pods: 0 Feb 13 11:25:07.884: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:08.849: INFO: Number of nodes with available pods: 0 Feb 13 11:25:08.849: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:09.868: INFO: Number of nodes with available pods: 1 Feb 13 11:25:09.868: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Feb 13 11:25:09.935: INFO: Number of nodes with available pods: 1 Feb 13 11:25:09.935: INFO: Number of running nodes: 0, number of available pods: 1 Feb 13 11:25:11.122: INFO: Number of nodes with available pods: 0 Feb 13 11:25:11.123: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet 
node selector to green, and change its update strategy to RollingUpdate Feb 13 11:25:11.314: INFO: Number of nodes with available pods: 0 Feb 13 11:25:11.314: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:12.500: INFO: Number of nodes with available pods: 0 Feb 13 11:25:12.500: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:13.326: INFO: Number of nodes with available pods: 0 Feb 13 11:25:13.326: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:14.752: INFO: Number of nodes with available pods: 0 Feb 13 11:25:14.752: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:15.335: INFO: Number of nodes with available pods: 0 Feb 13 11:25:15.335: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:16.324: INFO: Number of nodes with available pods: 0 Feb 13 11:25:16.324: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:17.434: INFO: Number of nodes with available pods: 0 Feb 13 11:25:17.435: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:18.332: INFO: Number of nodes with available pods: 0 Feb 13 11:25:18.332: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:19.557: INFO: Number of nodes with available pods: 0 Feb 13 11:25:19.557: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:20.333: INFO: Number of nodes with available pods: 0 Feb 13 11:25:20.333: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:21.326: INFO: Number of nodes with available pods: 0 Feb 13 11:25:21.326: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:23.048: INFO: Number of nodes with available pods: 0 Feb 13 11:25:23.049: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon 
pod Feb 13 11:25:23.597: INFO: Number of nodes with available pods: 0 Feb 13 11:25:23.598: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:24.327: INFO: Number of nodes with available pods: 0 Feb 13 11:25:24.327: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:25.326: INFO: Number of nodes with available pods: 0 Feb 13 11:25:25.326: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:26.332: INFO: Number of nodes with available pods: 0 Feb 13 11:25:26.332: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 11:25:27.331: INFO: Number of nodes with available pods: 1 Feb 13 11:25:27.331: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-bhchb, will wait for the garbage collector to delete the pods Feb 13 11:25:27.441: INFO: Deleting DaemonSet.extensions daemon-set took: 42.864646ms Feb 13 11:25:27.541: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.427538ms Feb 13 11:25:42.684: INFO: Number of nodes with available pods: 0 Feb 13 11:25:42.684: INFO: Number of running nodes: 0, number of available pods: 0 Feb 13 11:25:42.691: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bhchb/daemonsets","resourceVersion":"21524349"},"items":null} Feb 13 11:25:42.694: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bhchb/pods","resourceVersion":"21524349"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 
11:25:42.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-bhchb" for this suite.
Feb 13 11:25:48.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:25:48.795: INFO: namespace: e2e-tests-daemonsets-bhchb, resource: bindings, ignored listing per whitelist
Feb 13 11:25:48.897: INFO: namespace e2e-tests-daemonsets-bhchb deletion completed in 6.149383732s
• [SLOW TEST:49.610 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:25:48.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:25:59.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-zn59m" for this suite.
Feb 13 11:26:43.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:26:43.382: INFO: namespace: e2e-tests-kubelet-test-zn59m, resource: bindings, ignored listing per whitelist
Feb 13 11:26:43.430: INFO: namespace e2e-tests-kubelet-test-zn59m deletion completed in 44.204032987s
• [SLOW TEST:54.533 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:26:43.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-b65663a2-4e53-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 13 11:26:43.693: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b657f613-4e53-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-hmkqh" to be "success or failure"
Feb 13 11:26:43.703: INFO: Pod "pod-projected-secrets-b657f613-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.015444ms
Feb 13 11:26:45.861: INFO: Pod "pod-projected-secrets-b657f613-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167814005s
Feb 13 11:26:47.889: INFO: Pod "pod-projected-secrets-b657f613-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196247966s
Feb 13 11:26:49.904: INFO: Pod "pod-projected-secrets-b657f613-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.210777547s
Feb 13 11:26:52.152: INFO: Pod "pod-projected-secrets-b657f613-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.45865143s
Feb 13 11:26:54.166: INFO: Pod "pod-projected-secrets-b657f613-4e53-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.473317775s
STEP: Saw pod success
Feb 13 11:26:54.167: INFO: Pod "pod-projected-secrets-b657f613-4e53-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 11:26:54.173: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-b657f613-4e53-11ea-aba9-0242ac110007 container secret-volume-test:
STEP: delete the pod
Feb 13 11:26:54.369: INFO: Waiting for pod pod-projected-secrets-b657f613-4e53-11ea-aba9-0242ac110007 to disappear
Feb 13 11:26:54.668: INFO: Pod pod-projected-secrets-b657f613-4e53-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:26:54.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hmkqh" for this suite.
Feb 13 11:27:00.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:27:00.851: INFO: namespace: e2e-tests-projected-hmkqh, resource: bindings, ignored listing per whitelist
Feb 13 11:27:00.917: INFO: namespace e2e-tests-projected-hmkqh deletion completed in 6.232926697s
• [SLOW TEST:17.487 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:27:00.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 13 11:27:01.194: INFO: Creating deployment "nginx-deployment"
Feb 13 11:27:01.372: INFO: Waiting for observed generation 1
Feb 13 11:27:04.492: INFO: Waiting for all required pods to come up
Feb 13 11:27:06.253: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 13 11:27:46.735: INFO: Waiting for deployment "nginx-deployment" to complete
Feb 13 11:27:46.773: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb 13 11:27:46.824: INFO: Updating deployment nginx-deployment
Feb 13 11:27:46.824: INFO: Waiting for observed generation 2
Feb 13 11:27:50.763: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 13 11:27:50.784: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 13 11:27:51.309: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 13 11:27:51.344: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 13 11:27:51.344: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 13 11:27:51.627: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 13 11:27:51.701: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb 13 11:27:51.701: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb 13 11:27:53.888: INFO: Updating deployment nginx-deployment
Feb 13 11:27:53.889: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb 13 11:27:54.977: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 13 11:27:55.050: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 13 11:27:55.159: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-vdcv7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vdcv7/deployments/nginx-deployment,UID:c0ca939b-4e53-11ea-a994-fa163e34d433,ResourceVersion:21524737,Generation:3,CreationTimestamp:2020-02-13 11:27:01 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[{Progressing True 2020-02-13 11:27:48 +0000 UTC 2020-02-13 11:27:01 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-02-13 11:27:55 +0000 UTC 2020-02-13 11:27:55 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Feb 13 11:27:55.171: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-vdcv7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vdcv7/replicasets/nginx-deployment-5c98f8fb5,UID:dbfe8e49-4e53-11ea-a994-fa163e34d433,ResourceVersion:21524735,Generation:3,CreationTimestamp:2020-02-13 11:27:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c0ca939b-4e53-11ea-a994-fa163e34d433 0xc000802347 0xc000802348}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 13 11:27:55.171: INFO: All old ReplicaSets of Deployment "nginx-deployment": Feb 13 11:27:55.172: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-vdcv7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vdcv7/replicasets/nginx-deployment-85ddf47c5d,UID:c0e88820-4e53-11ea-a994-fa163e34d433,ResourceVersion:21524734,Generation:3,CreationTimestamp:2020-02-13 11:27:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c0ca939b-4e53-11ea-a994-fa163e34d433 0xc000802467 0xc000802468}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Feb 13 11:27:55.675: INFO: Pod "nginx-deployment-5c98f8fb5-4gnf9" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4gnf9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vdcv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vdcv7/pods/nginx-deployment-5c98f8fb5-4gnf9,UID:dc668e85-4e53-11ea-a994-fa163e34d433,ResourceVersion:21524727,Generation:0,CreationTimestamp:2020-02-13 11:27:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 dbfe8e49-4e53-11ea-a994-fa163e34d433 0xc000803a07 0xc000803a08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9hlc8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9hlc8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9hlc8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000803a90} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc000803ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-13 11:27:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 13 11:27:55.675: INFO: Pod "nginx-deployment-5c98f8fb5-gj7rq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gj7rq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vdcv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vdcv7/pods/nginx-deployment-5c98f8fb5-gj7rq,UID:dc0d2f5b-4e53-11ea-a994-fa163e34d433,ResourceVersion:21524724,Generation:0,CreationTimestamp:2020-02-13 11:27:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 dbfe8e49-4e53-11ea-a994-fa163e34d433 0xc000803ee7 0xc000803ee8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9hlc8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9hlc8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9hlc8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002304060} {node.kubernetes.io/unreachable Exists NoExecute 0xc002304080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-13 11:27:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 13 11:27:55.675: INFO: Pod "nginx-deployment-5c98f8fb5-kd9bk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kd9bk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vdcv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vdcv7/pods/nginx-deployment-5c98f8fb5-kd9bk,UID:dc0d286e-4e53-11ea-a994-fa163e34d433,ResourceVersion:21524709,Generation:0,CreationTimestamp:2020-02-13 11:27:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 dbfe8e49-4e53-11ea-a994-fa163e34d433 0xc002304147 0xc002304148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9hlc8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9hlc8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9hlc8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002304360} {node.kubernetes.io/unreachable Exists NoExecute 0xc002304380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-13 11:27:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 13 11:27:55.676: INFO: Pod "nginx-deployment-5c98f8fb5-lqkzb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lqkzb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vdcv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vdcv7/pods/nginx-deployment-5c98f8fb5-lqkzb,UID:dc08d215-4e53-11ea-a994-fa163e34d433,ResourceVersion:21524722,Generation:0,CreationTimestamp:2020-02-13 11:27:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 dbfe8e49-4e53-11ea-a994-fa163e34d433 0xc002304447 0xc002304448}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9hlc8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9hlc8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9hlc8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002304510} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc002304530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-13 11:27:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 13 11:27:55.676: INFO: Pod "nginx-deployment-5c98f8fb5-tk8pn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tk8pn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vdcv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vdcv7/pods/nginx-deployment-5c98f8fb5-tk8pn,UID:dc5dccdd-4e53-11ea-a994-fa163e34d433,ResourceVersion:21524726,Generation:0,CreationTimestamp:2020-02-13 11:27:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 dbfe8e49-4e53-11ea-a994-fa163e34d433 0xc0023045f7 0xc0023045f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9hlc8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9hlc8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9hlc8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002304660} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023046b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-13 11:27:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 13 11:27:55.676: INFO: Pod "nginx-deployment-85ddf47c5d-2rjnb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2rjnb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vdcv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vdcv7/pods/nginx-deployment-85ddf47c5d-2rjnb,UID:c11d0da2-4e53-11ea-a994-fa163e34d433,ResourceVersion:21524659,Generation:0,CreationTimestamp:2020-02-13 11:27:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0e88820-4e53-11ea-a994-fa163e34d433 0xc002304777 0xc002304778}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9hlc8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9hlc8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9hlc8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023047e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002304830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:01 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-02-13 11:27:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-13 11:27:38 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b89bb79faf323d11f0097f69bfd77bd1d5c725e282a9941f3bfae31d8a2fc2b6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 13 11:27:55.676: INFO: Pod "nginx-deployment-85ddf47c5d-4px6c" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4px6c,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vdcv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vdcv7/pods/nginx-deployment-85ddf47c5d-4px6c,UID:c0f10563-4e53-11ea-a994-fa163e34d433,ResourceVersion:21524637,Generation:0,CreationTimestamp:2020-02-13 11:27:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0e88820-4e53-11ea-a994-fa163e34d433 0xc0023048f7 0xc0023048f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9hlc8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9hlc8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9hlc8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002304960} {node.kubernetes.io/unreachable Exists NoExecute 0xc002304a00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:01 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-13 11:27:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-13 11:27:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e555ba123c9b8bca64cc4dd992ed216afad748b0c0368c53943fd9dc64082df8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 13 11:27:55.677: INFO: Pod "nginx-deployment-85ddf47c5d-5tdjf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5tdjf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vdcv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vdcv7/pods/nginx-deployment-85ddf47c5d-5tdjf,UID:c11454c5-4e53-11ea-a994-fa163e34d433,ResourceVersion:21524646,Generation:0,CreationTimestamp:2020-02-13 11:27:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0e88820-4e53-11ea-a994-fa163e34d433 0xc002304ac7 0xc002304ac8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9hlc8 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9hlc8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9hlc8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002304b30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002304b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:01 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-02-13 11:27:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-13 11:27:37 +0000 UTC,} 
nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://dcd72c4fb37e47e403a139e7b24fb55772bbb9861e6b303f3aee46a2e2ef29a8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 13 11:27:55.677: INFO: Pod "nginx-deployment-85ddf47c5d-7zx67" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7zx67,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vdcv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vdcv7/pods/nginx-deployment-85ddf47c5d-7zx67,UID:c1149bc3-4e53-11ea-a994-fa163e34d433,ResourceVersion:21524666,Generation:0,CreationTimestamp:2020-02-13 11:27:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0e88820-4e53-11ea-a994-fa163e34d433 0xc002304c87 0xc002304c88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9hlc8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9hlc8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9hlc8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002304cf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002304d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:01 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-02-13 11:27:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-13 11:27:37 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1a7feaaec716c2178c7437870329a455d429d69e4ebecc71ded5b4d4531d9afd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 13 11:27:55.677: INFO: Pod "nginx-deployment-85ddf47c5d-84x59" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-84x59,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vdcv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vdcv7/pods/nginx-deployment-85ddf47c5d-84x59,UID:c114ccaf-4e53-11ea-a994-fa163e34d433,ResourceVersion:21524670,Generation:0,CreationTimestamp:2020-02-13 11:27:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0e88820-4e53-11ea-a994-fa163e34d433 0xc002304e67 0xc002304e68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9hlc8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9hlc8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9hlc8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002304ed0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002304ef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:01 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-02-13 11:27:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-13 11:27:38 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e90d1ade1d484d82dab4fa4365afbebf356badbc33d25505c2b090a32729b992}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 13 11:27:55.677: INFO: Pod "nginx-deployment-85ddf47c5d-bz86p" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bz86p,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vdcv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vdcv7/pods/nginx-deployment-85ddf47c5d-bz86p,UID:c11d5d9e-4e53-11ea-a994-fa163e34d433,ResourceVersion:21524654,Generation:0,CreationTimestamp:2020-02-13 11:27:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0e88820-4e53-11ea-a994-fa163e34d433 0xc002305037 0xc002305038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9hlc8 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9hlc8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9hlc8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023050a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023050c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:01 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-02-13 11:27:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-13 11:27:37 +0000 UTC,} 
nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f32216fff526a2a94564fdfb737f82cba0d518db80761ca2581e4b23e3fe1f65}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 13 11:27:55.677: INFO: Pod "nginx-deployment-85ddf47c5d-fr4vx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fr4vx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vdcv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vdcv7/pods/nginx-deployment-85ddf47c5d-fr4vx,UID:e0f67fc4-4e53-11ea-a994-fa163e34d433,ResourceVersion:21524740,Generation:0,CreationTimestamp:2020-02-13 11:27:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0e88820-4e53-11ea-a994-fa163e34d433 0xc002305237 0xc002305238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9hlc8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9hlc8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9hlc8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023052a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023052c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 13 11:27:55.678: INFO: Pod "nginx-deployment-85ddf47c5d-ncrnl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ncrnl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vdcv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vdcv7/pods/nginx-deployment-85ddf47c5d-ncrnl,UID:c10fa435-4e53-11ea-a994-fa163e34d433,ResourceVersion:21524639,Generation:0,CreationTimestamp:2020-02-13 11:27:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0e88820-4e53-11ea-a994-fa163e34d433 0xc002305320 0xc002305321}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9hlc8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9hlc8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9hlc8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023054c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023054e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:01 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-02-13 11:27:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-13 11:27:37 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine 
docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4348ebb2a094f76792b9d1b586ce759622d4a8b74b273b9d13f06715311a264e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 13 11:27:55.678: INFO: Pod "nginx-deployment-85ddf47c5d-nk57k" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nk57k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vdcv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vdcv7/pods/nginx-deployment-85ddf47c5d-nk57k,UID:c114a850-4e53-11ea-a994-fa163e34d433,ResourceVersion:21524642,Generation:0,CreationTimestamp:2020-02-13 11:27:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0e88820-4e53-11ea-a994-fa163e34d433 0xc0023055a7 0xc0023055a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9hlc8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9hlc8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9hlc8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023056a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023056c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:02 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:27:01 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-13 11:27:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-13 11:27:24 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://60486820fd28153124b58ebf6568654830169c1b405a6e707de25f3442c46cde}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:27:55.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-vdcv7" for this suite. 
Feb 13 11:28:25.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:28:26.294: INFO: namespace: e2e-tests-deployment-vdcv7, resource: bindings, ignored listing per whitelist Feb 13 11:28:26.299: INFO: namespace e2e-tests-deployment-vdcv7 deletion completed in 29.77890783s • [SLOW TEST:85.382 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:28:26.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 13 11:28:28.680: INFO: Waiting up to 5m0s for pod "pod-f4eaf00f-4e53-11ea-aba9-0242ac110007" in namespace "e2e-tests-emptydir-n9nxw" to be "success or failure" Feb 13 11:28:28.709: INFO: Pod "pod-f4eaf00f-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 28.315888ms Feb 13 11:28:31.681: INFO: Pod "pod-f4eaf00f-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.000057178s Feb 13 11:28:34.141: INFO: Pod "pod-f4eaf00f-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 5.460134933s Feb 13 11:28:36.535: INFO: Pod "pod-f4eaf00f-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.854689631s Feb 13 11:28:38.557: INFO: Pod "pod-f4eaf00f-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.876301615s Feb 13 11:28:40.583: INFO: Pod "pod-f4eaf00f-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.901956414s Feb 13 11:28:42.619: INFO: Pod "pod-f4eaf00f-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 13.938615251s Feb 13 11:28:44.632: INFO: Pod "pod-f4eaf00f-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.951186259s Feb 13 11:28:46.651: INFO: Pod "pod-f4eaf00f-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 17.970424512s Feb 13 11:28:48.672: INFO: Pod "pod-f4eaf00f-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 19.991421612s Feb 13 11:28:51.636: INFO: Pod "pod-f4eaf00f-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 22.955646249s Feb 13 11:28:53.661: INFO: Pod "pod-f4eaf00f-4e53-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 24.980322573s Feb 13 11:28:55.684: INFO: Pod "pod-f4eaf00f-4e53-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 27.002978312s STEP: Saw pod success Feb 13 11:28:55.684: INFO: Pod "pod-f4eaf00f-4e53-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 11:28:55.690: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f4eaf00f-4e53-11ea-aba9-0242ac110007 container test-container: STEP: delete the pod Feb 13 11:28:56.523: INFO: Waiting for pod pod-f4eaf00f-4e53-11ea-aba9-0242ac110007 to disappear Feb 13 11:28:56.644: INFO: Pod pod-f4eaf00f-4e53-11ea-aba9-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:28:56.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-n9nxw" for this suite. Feb 13 11:29:02.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:29:02.789: INFO: namespace: e2e-tests-emptydir-n9nxw, resource: bindings, ignored listing per whitelist Feb 13 11:29:02.908: INFO: namespace e2e-tests-emptydir-n9nxw deletion completed in 6.241695157s • [SLOW TEST:36.608 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:29:02.908: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 13 11:29:13.211: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-0973d889-4e54-11ea-aba9-0242ac110007,GenerateName:,Namespace:e2e-tests-events-7wqs7,SelfLink:/api/v1/namespaces/e2e-tests-events-7wqs7/pods/send-events-0973d889-4e54-11ea-aba9-0242ac110007,UID:09756787-4e54-11ea-a994-fa163e34d433,ResourceVersion:21525181,Generation:0,CreationTimestamp:2020-02-13 11:29:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 102802502,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4wr6w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4wr6w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-4wr6w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b23f90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b23fb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:29:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:29:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:29:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 11:29:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-13 11:29:03 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-13 11:29:11 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://dc005e2cda3d1ea727daee2789e2d19322ad551b90c0a91c7ff4e226992e2bc5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Feb 13 11:29:15.227: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 13 11:29:17.250: INFO: Saw kubelet event for our pod. 
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:29:17.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-7wqs7" for this suite.
Feb 13 11:30:03.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:30:03.485: INFO: namespace: e2e-tests-events-7wqs7, resource: bindings, ignored listing per whitelist
Feb 13 11:30:03.616: INFO: namespace e2e-tests-events-7wqs7 deletion completed in 46.23047459s

• [SLOW TEST:60.708 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Pods
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:30:03.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 13 11:30:14.589: INFO: Successfully updated pod "pod-update-2dac57e0-4e54-11ea-aba9-0242ac110007"
STEP: verifying the updated pod is in kubernetes
Feb 13 11:30:14.628: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:30:14.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-mpbfs" for this suite.
Feb 13 11:30:36.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:30:36.779: INFO: namespace: e2e-tests-pods-mpbfs, resource: bindings, ignored listing per whitelist
Feb 13 11:30:36.837: INFO: namespace e2e-tests-pods-mpbfs deletion completed in 22.199608044s

• [SLOW TEST:33.220 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:30:36.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:30:43.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-hm5ld" for this suite.
Feb 13 11:30:50.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:30:50.679: INFO: namespace: e2e-tests-namespaces-hm5ld, resource: bindings, ignored listing per whitelist
Feb 13 11:30:50.724: INFO: namespace e2e-tests-namespaces-hm5ld deletion completed in 6.975968009s
STEP: Destroying namespace "e2e-tests-nsdeletetest-l5f9b" for this suite.
Feb 13 11:30:50.748: INFO: Namespace e2e-tests-nsdeletetest-l5f9b was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-qkr8j" for this suite.
Feb 13 11:30:56.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:30:56.876: INFO: namespace: e2e-tests-nsdeletetest-qkr8j, resource: bindings, ignored listing per whitelist
Feb 13 11:30:56.938: INFO: namespace e2e-tests-nsdeletetest-qkr8j deletion completed in 6.190320353s

• [SLOW TEST:20.102 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:30:56.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-4d6e17e9-4e54-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 13 11:30:57.191: INFO: Waiting up to 5m0s for pod "pod-configmaps-4d6f4975-4e54-11ea-aba9-0242ac110007" in namespace "e2e-tests-configmap-tvhlr" to be "success or failure"
Feb 13 11:30:57.201: INFO: Pod "pod-configmaps-4d6f4975-4e54-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.632795ms
Feb 13 11:30:59.313: INFO: Pod "pod-configmaps-4d6f4975-4e54-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121517649s
Feb 13 11:31:01.331: INFO: Pod "pod-configmaps-4d6f4975-4e54-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13984416s
Feb 13 11:31:03.956: INFO: Pod "pod-configmaps-4d6f4975-4e54-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.764325946s
Feb 13 11:31:05.977: INFO: Pod "pod-configmaps-4d6f4975-4e54-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.785311475s
Feb 13 11:31:07.993: INFO: Pod "pod-configmaps-4d6f4975-4e54-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.801256148s
Feb 13 11:31:10.943: INFO: Pod "pod-configmaps-4d6f4975-4e54-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.751494363s
STEP: Saw pod success
Feb 13 11:31:10.943: INFO: Pod "pod-configmaps-4d6f4975-4e54-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 11:31:10.975: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-4d6f4975-4e54-11ea-aba9-0242ac110007 container configmap-volume-test:
STEP: delete the pod
Feb 13 11:31:11.463: INFO: Waiting for pod pod-configmaps-4d6f4975-4e54-11ea-aba9-0242ac110007 to disappear
Feb 13 11:31:11.482: INFO: Pod pod-configmaps-4d6f4975-4e54-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:31:11.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-tvhlr" for this suite.
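The "Waiting up to 5m0s for pod … to be \"success or failure\"" lines above are the framework's standard poll loop: check a condition, log phase and elapsed time, sleep, repeat until success or timeout. The real implementation is Go inside the e2e framework; the sketch below is only a minimal Python analogue of that loop, with the clock and sleep injectable so it can be exercised without real waiting (`wait_for` and its parameters are names invented here, not framework API):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` until it returns truthy; raise TimeoutError after
    `timeout` seconds. Returns the elapsed time on success.

    Mirrors the shape of the e2e framework's wait loop, which logs the
    pod phase and elapsed time on every poll (as in the log above).
    """
    start = clock()
    while True:
        elapsed = clock() - start
        if condition():
            return elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"condition not met after {elapsed:.1f}s")
        sleep(interval)
```

In the run above the condition is "pod phase is Succeeded or Failed", polled roughly every two seconds, which is why each log line shows the phase plus a growing `Elapsed:` value.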
Feb 13 11:31:18.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:31:18.394: INFO: namespace: e2e-tests-configmap-tvhlr, resource: bindings, ignored listing per whitelist
Feb 13 11:31:18.406: INFO: namespace e2e-tests-configmap-tvhlr deletion completed in 6.213047833s

• [SLOW TEST:21.468 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:31:18.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 13 11:34:22.254: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:34:22.355: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:34:24.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:34:24.370: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:34:26.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:34:26.375: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:34:28.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:34:28.380: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:34:30.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:34:30.373: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:34:32.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:34:32.373: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:34:34.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:34:34.370: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:34:36.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:34:36.379: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:34:38.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:34:38.377: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:34:40.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:34:40.393: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:34:42.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:34:42.375: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:34:44.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:34:44.378: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:34:46.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:34:46.383: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:34:48.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:34:48.371: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:34:50.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:34:50.379: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:34:52.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:34:52.373: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:34:54.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:34:54.369: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:34:56.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:34:56.376: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:34:58.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:34:58.369: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:00.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:00.548: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:02.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:02.373: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:04.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:04.372: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:06.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:06.368: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:08.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:08.392: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:10.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:10.369: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:12.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:12.426: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:14.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:14.381: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:16.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:16.378: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:18.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:18.375: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:20.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:20.375: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:22.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:22.376: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:24.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:24.380: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:26.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:26.375: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:28.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:28.376: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:30.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:30.380: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:32.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:32.402: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:34.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:34.371: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:36.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:36.376: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:38.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:38.374: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:40.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:40.374: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:42.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:42.373: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:44.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:44.381: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:46.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:46.380: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:48.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:48.375: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:50.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:50.373: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:52.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:52.372: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 13 11:35:54.356: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 13 11:35:54.370: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:35:54.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-s6sfd" for this suite.
Feb 13 11:36:18.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:36:18.756: INFO: namespace: e2e-tests-container-lifecycle-hook-s6sfd, resource: bindings, ignored listing per whitelist
Feb 13 11:36:18.782: INFO: namespace e2e-tests-container-lifecycle-hook-s6sfd deletion completed in 24.399311726s

• [SLOW TEST:300.376 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:36:18.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-msrgn
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-msrgn to expose endpoints map[]
Feb 13 11:36:19.343: INFO: Get endpoints failed (6.642044ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb 13 11:36:20.362: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-msrgn exposes endpoints map[] (1.025641002s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-msrgn
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-msrgn to expose endpoints map[pod1:[80]]
Feb 13 11:36:24.622: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.23108872s elapsed, will retry)
Feb 13 11:36:29.144: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-msrgn exposes endpoints map[pod1:[80]] (8.752508204s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-msrgn
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-msrgn to expose endpoints map[pod1:[80] pod2:[80]]
Feb 13 11:36:33.635: INFO: Unexpected endpoints: found map[0e174164-4e55-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.466459339s elapsed, will retry)
Feb 13 11:36:40.835: INFO: Unexpected endpoints: found map[0e174164-4e55-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (11.666669206s elapsed, will retry)
Feb 13 11:36:41.880: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-msrgn exposes endpoints map[pod2:[80] pod1:[80]] (12.712099041s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-msrgn
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-msrgn to expose endpoints map[pod2:[80]]
Feb 13 11:36:43.372: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-msrgn exposes endpoints map[pod2:[80]] (1.445237264s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-msrgn
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-msrgn to expose endpoints map[]
Feb 13 11:36:44.665: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-msrgn exposes endpoints map[] (1.273948699s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:36:44.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-msrgn" for this suite.
Feb 13 11:37:08.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:37:08.964: INFO: namespace: e2e-tests-services-msrgn, resource: bindings, ignored listing per whitelist
Feb 13 11:37:09.060: INFO: namespace e2e-tests-services-msrgn deletion completed in 24.202811038s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:50.276 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:37:09.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:37:09.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-x7hpn" for this suite.
Feb 13 11:37:16.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:37:16.226: INFO: namespace: e2e-tests-kubelet-test-x7hpn, resource: bindings, ignored listing per whitelist
Feb 13 11:37:16.402: INFO: namespace e2e-tests-kubelet-test-x7hpn deletion completed in 6.666153187s

• [SLOW TEST:7.342 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:37:16.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Feb 13 11:37:16.764: INFO: Waiting up to 5m0s for pod "var-expansion-2fa602f6-4e55-11ea-aba9-0242ac110007" in namespace "e2e-tests-var-expansion-ftzn5" to be "success or failure"
Feb 13 11:37:16.780: INFO: Pod "var-expansion-2fa602f6-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.693377ms
Feb 13 11:37:18.996: INFO: Pod "var-expansion-2fa602f6-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231387939s
Feb 13 11:37:21.438: INFO: Pod "var-expansion-2fa602f6-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.673174434s
Feb 13 11:37:23.857: INFO: Pod "var-expansion-2fa602f6-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.092694011s
Feb 13 11:37:25.916: INFO: Pod "var-expansion-2fa602f6-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.151423651s
Feb 13 11:37:27.928: INFO: Pod "var-expansion-2fa602f6-4e55-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.1636815s
STEP: Saw pod success
Feb 13 11:37:27.928: INFO: Pod "var-expansion-2fa602f6-4e55-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 11:37:28.415: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-2fa602f6-4e55-11ea-aba9-0242ac110007 container dapi-container:
STEP: delete the pod
Feb 13 11:37:28.983: INFO: Waiting for pod var-expansion-2fa602f6-4e55-11ea-aba9-0242ac110007 to disappear
Feb 13 11:37:28.991: INFO: Pod var-expansion-2fa602f6-4e55-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:37:28.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-ftzn5" for this suite.
Feb 13 11:37:35.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:37:35.045: INFO: namespace: e2e-tests-var-expansion-ftzn5, resource: bindings, ignored listing per whitelist
Feb 13 11:37:35.164: INFO: namespace e2e-tests-var-expansion-ftzn5 deletion completed in 6.161746703s

• [SLOW TEST:18.762 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:37:35.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:38:35.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-gthn9" for this suite.
Feb 13 11:38:43.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:38:43.310: INFO: namespace: e2e-tests-container-runtime-gthn9, resource: bindings, ignored listing per whitelist
Feb 13 11:38:43.432: INFO: namespace e2e-tests-container-runtime-gthn9 deletion completed in 8.268305364s
• [SLOW TEST:68.267 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:38:43.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-d749
STEP: Creating a pod to test atomic-volume-subpath
Feb 13 11:38:43.808: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-d749" in namespace "e2e-tests-subpath-crmdb" to be "success or failure"
Feb 13 11:38:43.879: INFO: Pod "pod-subpath-test-secret-d749": Phase="Pending", Reason="", readiness=false. Elapsed: 71.26144ms
Feb 13 11:38:46.061: INFO: Pod "pod-subpath-test-secret-d749": Phase="Pending", Reason="", readiness=false. Elapsed: 2.252393502s
Feb 13 11:38:48.080: INFO: Pod "pod-subpath-test-secret-d749": Phase="Pending", Reason="", readiness=false. Elapsed: 4.272140928s
Feb 13 11:38:50.372: INFO: Pod "pod-subpath-test-secret-d749": Phase="Pending", Reason="", readiness=false. Elapsed: 6.564225321s
Feb 13 11:38:52.392: INFO: Pod "pod-subpath-test-secret-d749": Phase="Pending", Reason="", readiness=false. Elapsed: 8.584164708s
Feb 13 11:38:56.767: INFO: Pod "pod-subpath-test-secret-d749": Phase="Pending", Reason="", readiness=false. Elapsed: 12.95920513s
Feb 13 11:38:58.780: INFO: Pod "pod-subpath-test-secret-d749": Phase="Pending", Reason="", readiness=false. Elapsed: 14.972039581s
Feb 13 11:39:00.796: INFO: Pod "pod-subpath-test-secret-d749": Phase="Pending", Reason="", readiness=false. Elapsed: 16.987501545s
Feb 13 11:39:02.826: INFO: Pod "pod-subpath-test-secret-d749": Phase="Pending", Reason="", readiness=false. Elapsed: 19.01816488s
Feb 13 11:39:04.839: INFO: Pod "pod-subpath-test-secret-d749": Phase="Running", Reason="", readiness=false. Elapsed: 21.030556785s
Feb 13 11:39:06.858: INFO: Pod "pod-subpath-test-secret-d749": Phase="Running", Reason="", readiness=false. Elapsed: 23.050363751s
Feb 13 11:39:08.873: INFO: Pod "pod-subpath-test-secret-d749": Phase="Running", Reason="", readiness=false. Elapsed: 25.064500509s
Feb 13 11:39:10.889: INFO: Pod "pod-subpath-test-secret-d749": Phase="Running", Reason="", readiness=false. Elapsed: 27.081133696s
Feb 13 11:39:12.910: INFO: Pod "pod-subpath-test-secret-d749": Phase="Running", Reason="", readiness=false. Elapsed: 29.101732621s
Feb 13 11:39:14.927: INFO: Pod "pod-subpath-test-secret-d749": Phase="Running", Reason="", readiness=false. Elapsed: 31.118883096s
Feb 13 11:39:16.946: INFO: Pod "pod-subpath-test-secret-d749": Phase="Running", Reason="", readiness=false. Elapsed: 33.137799888s
Feb 13 11:39:18.961: INFO: Pod "pod-subpath-test-secret-d749": Phase="Running", Reason="", readiness=false. Elapsed: 35.153348591s
Feb 13 11:39:20.990: INFO: Pod "pod-subpath-test-secret-d749": Phase="Running", Reason="", readiness=false. Elapsed: 37.182227562s
Feb 13 11:39:23.012: INFO: Pod "pod-subpath-test-secret-d749": Phase="Succeeded", Reason="", readiness=false. Elapsed: 39.20340715s
STEP: Saw pod success
Feb 13 11:39:23.012: INFO: Pod "pod-subpath-test-secret-d749" satisfied condition "success or failure"
Feb 13 11:39:23.020: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-d749 container test-container-subpath-secret-d749:
STEP: delete the pod
Feb 13 11:39:23.153: INFO: Waiting for pod pod-subpath-test-secret-d749 to disappear
Feb 13 11:39:23.171: INFO: Pod pod-subpath-test-secret-d749 no longer exists
STEP: Deleting pod pod-subpath-test-secret-d749
Feb 13 11:39:23.171: INFO: Deleting pod "pod-subpath-test-secret-d749" in namespace "e2e-tests-subpath-crmdb"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:39:23.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-crmdb" for this suite.
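Editor's note: the subpath test above mounts a secret volume and exposes a single key inside the container via `subPath`. A minimal sketch of that shape — the secret name, key, and mount path are illustrative assumptions, not the exact e2e fixture:

```yaml
# Illustrative only: mount one key of a secret at a file path via subPath.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox:1.29
    command: ["sh", "-c", "cat /test/secret-key"]
    volumeMounts:
    - name: secret-volume
      mountPath: /test/secret-key
      subPath: secret-key            # mounts just this key, not the whole volume directory
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret          # hypothetical secret containing key "secret-key"
```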
Feb 13 11:39:29.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:39:29.390: INFO: namespace: e2e-tests-subpath-crmdb, resource: bindings, ignored listing per whitelist
Feb 13 11:39:29.473: INFO: namespace e2e-tests-subpath-crmdb deletion completed in 6.24148174s
• [SLOW TEST:46.041 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:39:29.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-7ee8fc51-4e55-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 13 11:39:29.707: INFO: Waiting up to 5m0s for pod "pod-configmaps-7eecbc81-4e55-11ea-aba9-0242ac110007" in namespace "e2e-tests-configmap-8ksvc" to be "success or failure"
Feb 13 11:39:29.711: INFO: Pod "pod-configmaps-7eecbc81-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.53216ms
Feb 13 11:39:31.728: INFO: Pod "pod-configmaps-7eecbc81-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020693657s
Feb 13 11:39:33.745: INFO: Pod "pod-configmaps-7eecbc81-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038384743s
Feb 13 11:39:35.892: INFO: Pod "pod-configmaps-7eecbc81-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.185467829s
Feb 13 11:39:38.341: INFO: Pod "pod-configmaps-7eecbc81-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.633584595s
Feb 13 11:39:40.360: INFO: Pod "pod-configmaps-7eecbc81-4e55-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.652899366s
STEP: Saw pod success
Feb 13 11:39:40.360: INFO: Pod "pod-configmaps-7eecbc81-4e55-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 11:39:40.366: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7eecbc81-4e55-11ea-aba9-0242ac110007 container configmap-volume-test:
STEP: delete the pod
Feb 13 11:39:40.489: INFO: Waiting for pod pod-configmaps-7eecbc81-4e55-11ea-aba9-0242ac110007 to disappear
Feb 13 11:39:41.642: INFO: Pod pod-configmaps-7eecbc81-4e55-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:39:41.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-8ksvc" for this suite.
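Editor's note: the ConfigMap defaultMode test above mounts a configMap volume with an explicit file mode, and the test container checks the mode and contents of the projected files. A minimal sketch under assumed names (the configMap name, key, and mount path are illustrative, not the exact fixture):

```yaml
# Illustrative only: configMap volume with defaultMode controlling the
# permission bits of the projected files.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume    # hypothetical configMap with key "data-1"
      defaultMode: 0400              # files appear read-only to the owner
```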
Feb 13 11:39:48.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:39:48.174: INFO: namespace: e2e-tests-configmap-8ksvc, resource: bindings, ignored listing per whitelist
Feb 13 11:39:48.192: INFO: namespace e2e-tests-configmap-8ksvc deletion completed in 6.533818621s
• [SLOW TEST:18.718 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:39:48.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 13 11:39:48.739: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"8a400231-4e55-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0024e6a4a), BlockOwnerDeletion:(*bool)(0xc0024e6a4b)}}
Feb 13 11:39:48.912: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"8a25dfba-4e55-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0024e6c2a), BlockOwnerDeletion:(*bool)(0xc0024e6c2b)}}
Feb 13 11:39:48.951: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"8a2c7e01-4e55-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0021e1d6a), BlockOwnerDeletion:(*bool)(0xc0021e1d6b)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:39:54.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-l7hhv" for this suite.
Feb 13 11:40:00.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:40:00.220: INFO: namespace: e2e-tests-gc-l7hhv, resource: bindings, ignored listing per whitelist
Feb 13 11:40:00.307: INFO: namespace e2e-tests-gc-l7hhv deletion completed in 6.277620314s
• [SLOW TEST:12.115 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:40:00.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-915f831c-4e55-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 13 11:40:00.683: INFO: Waiting up to 5m0s for pod "pod-configmaps-916237b0-4e55-11ea-aba9-0242ac110007" in namespace "e2e-tests-configmap-4wgzd" to be "success or failure"
Feb 13 11:40:00.697: INFO: Pod "pod-configmaps-916237b0-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 13.950093ms
Feb 13 11:40:02.767: INFO: Pod "pod-configmaps-916237b0-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083525526s
Feb 13 11:40:04.799: INFO: Pod "pod-configmaps-916237b0-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116227611s
Feb 13 11:40:07.032: INFO: Pod "pod-configmaps-916237b0-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.349124435s
Feb 13 11:40:09.067: INFO: Pod "pod-configmaps-916237b0-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.38342989s
Feb 13 11:40:11.085: INFO: Pod "pod-configmaps-916237b0-4e55-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.401680925s
STEP: Saw pod success
Feb 13 11:40:11.085: INFO: Pod "pod-configmaps-916237b0-4e55-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 11:40:11.090: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-916237b0-4e55-11ea-aba9-0242ac110007 container configmap-volume-test:
STEP: delete the pod
Feb 13 11:40:11.250: INFO: Waiting for pod pod-configmaps-916237b0-4e55-11ea-aba9-0242ac110007 to disappear
Feb 13 11:40:11.256: INFO: Pod pod-configmaps-916237b0-4e55-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:40:11.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-4wgzd" for this suite.
Feb 13 11:40:18.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:40:18.264: INFO: namespace: e2e-tests-configmap-4wgzd, resource: bindings, ignored listing per whitelist
Feb 13 11:40:18.292: INFO: namespace e2e-tests-configmap-4wgzd deletion completed in 7.028539265s
• [SLOW TEST:17.985 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:40:18.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-9c093320-4e55-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 13 11:40:18.593: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9c0dfec8-4e55-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-njzt9" to be "success or failure"
Feb 13 11:40:18.664: INFO: Pod "pod-projected-secrets-9c0dfec8-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 70.918676ms
Feb 13 11:40:20.740: INFO: Pod "pod-projected-secrets-9c0dfec8-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146241957s
Feb 13 11:40:22.759: INFO: Pod "pod-projected-secrets-9c0dfec8-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165770937s
Feb 13 11:40:24.977: INFO: Pod "pod-projected-secrets-9c0dfec8-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.383754342s
Feb 13 11:40:27.044: INFO: Pod "pod-projected-secrets-9c0dfec8-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.450299059s
Feb 13 11:40:29.143: INFO: Pod "pod-projected-secrets-9c0dfec8-4e55-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.549279309s
STEP: Saw pod success
Feb 13 11:40:29.143: INFO: Pod "pod-projected-secrets-9c0dfec8-4e55-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 11:40:29.148: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-9c0dfec8-4e55-11ea-aba9-0242ac110007 container projected-secret-volume-test:
STEP: delete the pod
Feb 13 11:40:29.554: INFO: Waiting for pod pod-projected-secrets-9c0dfec8-4e55-11ea-aba9-0242ac110007 to disappear
Feb 13 11:40:29.931: INFO: Pod pod-projected-secrets-9c0dfec8-4e55-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:40:29.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-njzt9" for this suite.
Feb 13 11:40:36.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:40:36.645: INFO: namespace: e2e-tests-projected-njzt9, resource: bindings, ignored listing per whitelist
Feb 13 11:40:36.714: INFO: namespace e2e-tests-projected-njzt9 deletion completed in 6.766922618s
• [SLOW TEST:18.422 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:40:36.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 13 11:40:36.967: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 13 11:40:36.978: INFO: Waiting for terminating namespaces to be deleted...
Feb 13 11:40:36.981: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb 13 11:40:36.994: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Feb 13 11:40:36.994: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Feb 13 11:40:36.994: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Feb 13 11:40:36.994: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 13 11:40:36.994: INFO: Container coredns ready: true, restart count 0
Feb 13 11:40:36.994: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 13 11:40:36.995: INFO: Container kube-proxy ready: true, restart count 0
Feb 13 11:40:36.995: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded)
Feb 13 11:40:36.995: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 13 11:40:36.995: INFO: Container weave ready: true, restart count 0
Feb 13 11:40:36.995: INFO: Container weave-npc ready: true, restart count 0
Feb 13 11:40:36.995: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 13 11:40:36.995: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-ad1e6514-4e55-11ea-aba9-0242ac110007 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-ad1e6514-4e55-11ea-aba9-0242ac110007 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-ad1e6514-4e55-11ea-aba9-0242ac110007
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:40:59.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-69t94" for this suite.
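Editor's note: in the scheduler test above, the suite labels a node and then relaunches the pod with a matching `nodeSelector`. A minimal sketch of the relaunched pod — the label key/value and pod name are illustrative (the real test uses a random `kubernetes.io/e2e-…` key, as shown in the log):

```yaml
# Illustrative only: a pod that can only schedule onto a node carrying
# the matching label.
apiVersion: v1
kind: Pod
metadata:
  name: with-labels                       # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/e2e-example-label: "42" # must match a label applied to the node
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1
```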
Feb 13 11:41:13.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:41:14.039: INFO: namespace: e2e-tests-sched-pred-69t94, resource: bindings, ignored listing per whitelist
Feb 13 11:41:14.126: INFO: namespace e2e-tests-sched-pred-69t94 deletion completed in 14.441568132s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:37.411 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:41:14.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 13 11:41:44.411: INFO: Container started at 2020-02-13 11:41:22 +0000 UTC, pod became ready at 2020-02-13 11:41:42 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:41:44.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-87l46" for this suite.
Feb 13 11:42:08.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:42:08.692: INFO: namespace: e2e-tests-container-probe-87l46, resource: bindings, ignored listing per whitelist
Feb 13 11:42:08.730: INFO: namespace e2e-tests-container-probe-87l46 deletion completed in 24.313081682s
• [SLOW TEST:54.602 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:42:08.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Feb 13 11:42:08.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-p57dj run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 13 11:42:22.717: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0213 11:42:21.313657 1428 log.go:172] (0xc00055e420) (0xc0007d46e0) Create stream\nI0213 11:42:21.313875 1428 log.go:172] (0xc00055e420) (0xc0007d46e0) Stream added, broadcasting: 1\nI0213 11:42:21.319149 1428 log.go:172] (0xc00055e420) Reply frame received for 1\nI0213 11:42:21.319210 1428 log.go:172] (0xc00055e420) (0xc0008b2000) Create stream\nI0213 11:42:21.319222 1428 log.go:172] (0xc00055e420) (0xc0008b2000) Stream added, broadcasting: 3\nI0213 11:42:21.320448 1428 log.go:172] (0xc00055e420) Reply frame received for 3\nI0213 11:42:21.320478 1428 log.go:172] (0xc00055e420) (0xc000601680) Create stream\nI0213 11:42:21.320486 1428 log.go:172] (0xc00055e420) (0xc000601680) Stream added, broadcasting: 5\nI0213 11:42:21.322481 1428 log.go:172] (0xc00055e420) Reply frame received for 5\nI0213 11:42:21.322513 1428 log.go:172] (0xc00055e420) (0xc0008b20a0) Create stream\nI0213 11:42:21.322519 1428 log.go:172] (0xc00055e420) (0xc0008b20a0) Stream added, broadcasting: 7\nI0213 11:42:21.324324 1428 log.go:172] (0xc00055e420) Reply frame received for 7\nI0213 11:42:21.324656 1428 log.go:172] (0xc0008b2000) (3) Writing data frame\nI0213 11:42:21.324977 1428 log.go:172] (0xc0008b2000) (3) Writing data frame\nI0213 11:42:21.331326 1428 log.go:172] (0xc00055e420) Data frame received for 5\nI0213 11:42:21.331365 1428 log.go:172] (0xc000601680) (5) Data frame handling\nI0213 11:42:21.331412 1428 log.go:172] (0xc000601680) (5) Data frame sent\nI0213 11:42:21.334383 1428 log.go:172] (0xc00055e420) Data frame received for 5\nI0213 11:42:21.334404 1428 log.go:172] (0xc000601680) (5) Data frame handling\nI0213 11:42:21.334421 1428 log.go:172] (0xc000601680) (5) Data frame sent\nI0213 11:42:22.599834 1428 log.go:172] (0xc00055e420) (0xc0008b2000) Stream removed, broadcasting: 3\nI0213 11:42:22.600287 1428 log.go:172] (0xc00055e420) Data frame received for 1\nI0213 11:42:22.600370 1428 log.go:172] (0xc0007d46e0) (1) Data frame handling\nI0213 11:42:22.600431 1428 log.go:172] (0xc0007d46e0) (1) Data frame sent\nI0213 11:42:22.600509 1428 log.go:172] (0xc00055e420) (0xc0007d46e0) Stream removed, broadcasting: 1\nI0213 11:42:22.600591 1428 log.go:172] (0xc00055e420) (0xc000601680) Stream removed, broadcasting: 5\nI0213 11:42:22.600769 1428 log.go:172] (0xc00055e420) (0xc0008b20a0) Stream removed, broadcasting: 7\nI0213 11:42:22.600823 1428 log.go:172] (0xc00055e420) Go away received\nI0213 11:42:22.601209 1428 log.go:172] (0xc00055e420) (0xc0007d46e0) Stream removed, broadcasting: 1\nI0213 11:42:22.601262 1428 log.go:172] (0xc00055e420) (0xc0008b2000) Stream removed, broadcasting: 3\nI0213 11:42:22.601280 1428 log.go:172] (0xc00055e420) (0xc000601680) Stream removed, broadcasting: 5\nI0213 11:42:22.601299 1428 log.go:172] (0xc00055e420) (0xc0008b20a0) Stream removed, broadcasting: 7\n"
Feb 13 11:42:22.718: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:42:24.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-p57dj" for this suite.
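Editor's note: the `kubectl run --generator=job/v1` invocation above is deprecated (the stderr in the log says so). The Job it generates is roughly equivalent to a manifest of this shape — a hedged sketch reconstructed from the flags in the logged command, not an exact dump of what kubectl produced:

```yaml
# Illustrative only: approximate Job equivalent of
# `kubectl run e2e-test-rm-busybox-job --generator=job/v1 --restart=OnFailure
#  --image=docker.io/library/busybox:1.29 --stdin -- sh -c "cat && echo 'stdin closed'"`
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true        # the test attaches and writes "abcd1234" on stdin
```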
Feb 13 11:42:31.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:42:31.702: INFO: namespace: e2e-tests-kubectl-p57dj, resource: bindings, ignored listing per whitelist
Feb 13 11:42:31.709: INFO: namespace e2e-tests-kubectl-p57dj deletion completed in 6.96275435s
• [SLOW TEST:22.978 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:42:31.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Feb 13 11:42:31.982: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-65p5l" to be "success or failure"
Feb 13 11:42:31.999: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.029883ms
Feb 13 11:42:34.029: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047380691s
Feb 13 11:42:36.051: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068938224s
Feb 13 11:42:38.139: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157192763s
Feb 13 11:42:40.983: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.001066735s
Feb 13 11:42:43.010: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.027807835s
Feb 13 11:42:45.018: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.036292785s
STEP: Saw pod success
Feb 13 11:42:45.018: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 13 11:42:45.022: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1:
STEP: delete the pod
Feb 13 11:42:45.870: INFO: Waiting for pod pod-host-path-test to disappear
Feb 13 11:42:46.186: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:42:46.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-65p5l" for this suite.
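Editor's note: the HostPath test above mounts a host directory into the pod and checks the mode of the mounted path. A minimal sketch under assumed names — the host path, container command, and type field are illustrative, not the exact fixture:

```yaml
# Illustrative only: hostPath volume whose file mode the container inspects.
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /test-volume"]   # print the mode of the mount
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/host-path-test    # hypothetical directory on the node
      type: DirectoryOrCreate
```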
Feb 13 11:42:52.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:42:52.745: INFO: namespace: e2e-tests-hostpath-65p5l, resource: bindings, ignored listing per whitelist
Feb 13 11:42:52.758: INFO: namespace e2e-tests-hostpath-65p5l deletion completed in 6.559461889s

• [SLOW TEST:21.048 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:42:52.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-f813254f-4e55-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 13 11:42:52.994: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f817dcd6-4e55-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-xkd5b" to be "success or failure"
Feb 13 11:42:53.179: INFO: Pod "pod-projected-configmaps-f817dcd6-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 185.37473ms
Feb 13 11:42:55.457: INFO: Pod "pod-projected-configmaps-f817dcd6-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.463005877s
Feb 13 11:42:57.469: INFO: Pod "pod-projected-configmaps-f817dcd6-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475672715s
Feb 13 11:42:59.872: INFO: Pod "pod-projected-configmaps-f817dcd6-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.878652808s
Feb 13 11:43:01.892: INFO: Pod "pod-projected-configmaps-f817dcd6-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.898695986s
Feb 13 11:43:04.017: INFO: Pod "pod-projected-configmaps-f817dcd6-4e55-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.023152427s
Feb 13 11:43:06.064: INFO: Pod "pod-projected-configmaps-f817dcd6-4e55-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.070426502s
STEP: Saw pod success
Feb 13 11:43:06.064: INFO: Pod "pod-projected-configmaps-f817dcd6-4e55-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 11:43:06.077: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f817dcd6-4e55-11ea-aba9-0242ac110007 container projected-configmap-volume-test:
STEP: delete the pod
Feb 13 11:43:06.978: INFO: Waiting for pod pod-projected-configmaps-f817dcd6-4e55-11ea-aba9-0242ac110007 to disappear
Feb 13 11:43:07.415: INFO: Pod pod-projected-configmaps-f817dcd6-4e55-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:43:07.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xkd5b" for this suite.
Feb 13 11:43:13.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:43:14.302: INFO: namespace: e2e-tests-projected-xkd5b, resource: bindings, ignored listing per whitelist
Feb 13 11:43:14.399: INFO: namespace e2e-tests-projected-xkd5b deletion completed in 6.963125285s

• [SLOW TEST:21.641 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:43:14.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 13 11:43:14.803: INFO: Number of nodes with available pods: 0
Feb 13 11:43:14.803: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:16.651: INFO: Number of nodes with available pods: 0
Feb 13 11:43:16.651: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:17.012: INFO: Number of nodes with available pods: 0
Feb 13 11:43:17.013: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:17.823: INFO: Number of nodes with available pods: 0
Feb 13 11:43:17.823: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:18.828: INFO: Number of nodes with available pods: 0
Feb 13 11:43:18.828: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:19.877: INFO: Number of nodes with available pods: 0
Feb 13 11:43:19.877: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:21.516: INFO: Number of nodes with available pods: 0
Feb 13 11:43:21.516: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:22.516: INFO: Number of nodes with available pods: 0
Feb 13 11:43:22.516: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:22.842: INFO: Number of nodes with available pods: 0
Feb 13 11:43:22.842: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:23.843: INFO: Number of nodes with available pods: 0
Feb 13 11:43:23.843: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:24.854: INFO: Number of nodes with available pods: 1
Feb 13 11:43:24.854: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 13 11:43:24.910: INFO: Number of nodes with available pods: 0
Feb 13 11:43:24.910: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:25.934: INFO: Number of nodes with available pods: 0
Feb 13 11:43:25.934: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:27.026: INFO: Number of nodes with available pods: 0
Feb 13 11:43:27.026: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:28.064: INFO: Number of nodes with available pods: 0
Feb 13 11:43:28.064: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:29.034: INFO: Number of nodes with available pods: 0
Feb 13 11:43:29.034: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:29.941: INFO: Number of nodes with available pods: 0
Feb 13 11:43:29.941: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:30.932: INFO: Number of nodes with available pods: 0
Feb 13 11:43:30.932: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:31.932: INFO: Number of nodes with available pods: 0
Feb 13 11:43:31.932: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:32.933: INFO: Number of nodes with available pods: 0
Feb 13 11:43:32.933: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:34.017: INFO: Number of nodes with available pods: 0
Feb 13 11:43:34.017: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:35.268: INFO: Number of nodes with available pods: 0
Feb 13 11:43:35.268: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:35.933: INFO: Number of nodes with available pods: 0
Feb 13 11:43:35.933: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:36.933: INFO: Number of nodes with available pods: 0
Feb 13 11:43:36.933: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:37.936: INFO: Number of nodes with available pods: 0
Feb 13 11:43:37.936: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:38.991: INFO: Number of nodes with available pods: 0
Feb 13 11:43:38.991: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:39.934: INFO: Number of nodes with available pods: 0
Feb 13 11:43:39.934: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:40.927: INFO: Number of nodes with available pods: 0
Feb 13 11:43:40.927: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:41.936: INFO: Number of nodes with available pods: 0
Feb 13 11:43:41.936: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:42.935: INFO: Number of nodes with available pods: 0
Feb 13 11:43:42.935: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:44.364: INFO: Number of nodes with available pods: 0
Feb 13 11:43:44.365: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:45.178: INFO: Number of nodes with available pods: 0
Feb 13 11:43:45.178: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:45.929: INFO: Number of nodes with available pods: 0
Feb 13 11:43:45.929: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:46.931: INFO: Number of nodes with available pods: 0
Feb 13 11:43:46.931: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:48.858: INFO: Number of nodes with available pods: 0
Feb 13 11:43:48.859: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:49.227: INFO: Number of nodes with available pods: 0
Feb 13 11:43:49.227: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:50.001: INFO: Number of nodes with available pods: 0
Feb 13 11:43:50.001: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:51.008: INFO: Number of nodes with available pods: 0
Feb 13 11:43:51.008: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:51.944: INFO: Number of nodes with available pods: 0
Feb 13 11:43:51.944: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 13 11:43:52.939: INFO: Number of nodes with available pods: 1
Feb 13 11:43:52.939: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-lccjk, will wait for the garbage collector to delete the pods
Feb 13 11:43:53.049: INFO: Deleting DaemonSet.extensions daemon-set took: 27.184126ms
Feb 13 11:43:53.150: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.451818ms
Feb 13 11:44:12.721: INFO: Number of nodes with available pods: 0
Feb 13 11:44:12.721: INFO: Number of running nodes: 0, number of available pods: 0
Feb 13 11:44:12.729: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-lccjk/daemonsets","resourceVersion":"21526879"},"items":null}
Feb 13 11:44:12.735: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-lccjk/pods","resourceVersion":"21526879"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:44:12.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-lccjk" for this suite.
Feb 13 11:44:18.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:44:18.890: INFO: namespace: e2e-tests-daemonsets-lccjk, resource: bindings, ignored listing per whitelist
Feb 13 11:44:18.942: INFO: namespace e2e-tests-daemonsets-lccjk deletion completed in 6.19283413s

• [SLOW TEST:64.543 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:44:18.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 13 11:44:19.716: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2bc45844-4e56-11ea-aba9-0242ac110007" in namespace "e2e-tests-downward-api-h7z9q" to be "success or failure"
Feb 13 11:44:19.916: INFO: Pod "downwardapi-volume-2bc45844-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 200.244292ms
Feb 13 11:44:21.930: INFO: Pod "downwardapi-volume-2bc45844-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213714088s
Feb 13 11:44:24.136: INFO: Pod "downwardapi-volume-2bc45844-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.420474186s
Feb 13 11:44:26.151: INFO: Pod "downwardapi-volume-2bc45844-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435136171s
Feb 13 11:44:28.565: INFO: Pod "downwardapi-volume-2bc45844-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.848947702s
Feb 13 11:44:30.606: INFO: Pod "downwardapi-volume-2bc45844-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.890411773s
Feb 13 11:44:32.634: INFO: Pod "downwardapi-volume-2bc45844-4e56-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.917773098s
STEP: Saw pod success
Feb 13 11:44:32.634: INFO: Pod "downwardapi-volume-2bc45844-4e56-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 11:44:32.645: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2bc45844-4e56-11ea-aba9-0242ac110007 container client-container:
STEP: delete the pod
Feb 13 11:44:32.773: INFO: Waiting for pod downwardapi-volume-2bc45844-4e56-11ea-aba9-0242ac110007 to disappear
Feb 13 11:44:32.836: INFO: Pod downwardapi-volume-2bc45844-4e56-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:44:32.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-h7z9q" for this suite.
Feb 13 11:44:39.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:44:39.971: INFO: namespace: e2e-tests-downward-api-h7z9q, resource: bindings, ignored listing per whitelist
Feb 13 11:44:40.128: INFO: namespace e2e-tests-downward-api-h7z9q deletion completed in 7.27820809s

• [SLOW TEST:21.185 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:44:40.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 13 11:44:51.219: INFO: Successfully updated pod "pod-update-activedeadlineseconds-382a31bf-4e56-11ea-aba9-0242ac110007"
Feb 13 11:44:51.219: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-382a31bf-4e56-11ea-aba9-0242ac110007" in namespace "e2e-tests-pods-tbb7d" to be "terminated due to deadline exceeded"
Feb 13 11:44:51.244: INFO: Pod "pod-update-activedeadlineseconds-382a31bf-4e56-11ea-aba9-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 24.878951ms
Feb 13 11:44:55.082: INFO: Pod "pod-update-activedeadlineseconds-382a31bf-4e56-11ea-aba9-0242ac110007": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 3.863104249s
Feb 13 11:44:55.082: INFO: Pod "pod-update-activedeadlineseconds-382a31bf-4e56-11ea-aba9-0242ac110007" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:44:55.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-tbb7d" for this suite.
Feb 13 11:45:01.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:45:01.544: INFO: namespace: e2e-tests-pods-tbb7d, resource: bindings, ignored listing per whitelist
Feb 13 11:45:01.549: INFO: namespace e2e-tests-pods-tbb7d deletion completed in 6.241776274s

• [SLOW TEST:21.420 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:45:01.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:45:13.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-q264f" for this suite.
Feb 13 11:45:20.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:45:20.140: INFO: namespace: e2e-tests-kubelet-test-q264f, resource: bindings, ignored listing per whitelist
Feb 13 11:45:20.157: INFO: namespace e2e-tests-kubelet-test-q264f deletion completed in 6.191830767s

• [SLOW TEST:18.608 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:45:20.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 13 11:45:20.349: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4fedd3b2-4e56-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-mbzt9" to be "success or failure"
Feb 13 11:45:20.448: INFO: Pod "downwardapi-volume-4fedd3b2-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 99.452455ms
Feb 13 11:45:22.555: INFO: Pod "downwardapi-volume-4fedd3b2-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206057718s
Feb 13 11:45:24.585: INFO: Pod "downwardapi-volume-4fedd3b2-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23658831s
Feb 13 11:45:26.671: INFO: Pod "downwardapi-volume-4fedd3b2-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.322044744s
Feb 13 11:45:31.267: INFO: Pod "downwardapi-volume-4fedd3b2-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.918056009s
Feb 13 11:45:33.288: INFO: Pod "downwardapi-volume-4fedd3b2-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 12.939499801s
Feb 13 11:45:35.303: INFO: Pod "downwardapi-volume-4fedd3b2-4e56-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.954554889s
STEP: Saw pod success
Feb 13 11:45:35.303: INFO: Pod "downwardapi-volume-4fedd3b2-4e56-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 11:45:35.311: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4fedd3b2-4e56-11ea-aba9-0242ac110007 container client-container:
STEP: delete the pod
Feb 13 11:45:35.744: INFO: Waiting for pod downwardapi-volume-4fedd3b2-4e56-11ea-aba9-0242ac110007 to disappear
Feb 13 11:45:36.301: INFO: Pod downwardapi-volume-4fedd3b2-4e56-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:45:36.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mbzt9" for this suite.
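The projected downwardAPI test above verifies a defaulting rule: when a container does not set `limits.memory`, the downward API exposes the node's allocatable memory instead. The decision can be sketched as follows (an illustrative helper with plain byte counts standing in for `resource.Quantity`; not the kubelet's actual code):

```go
package main

import "fmt"

// effectiveMemoryLimit returns the value the downward API exposes for
// limits.memory: the container's own limit when one is set (> 0),
// otherwise the node's allocatable memory. The name and signature are
// hypothetical, chosen for this sketch.
func effectiveMemoryLimit(containerLimitBytes, nodeAllocatableBytes int64) int64 {
	if containerLimitBytes > 0 {
		return containerLimitBytes
	}
	return nodeAllocatableBytes
}

func main() {
	// No limit set: fall back to node allocatable (4 GiB here).
	fmt.Println(effectiveMemoryLimit(0, 4<<30))
	// Limit set to 512 MiB: the limit itself is exposed.
	fmt.Println(effectiveMemoryLimit(512<<20, 4<<30))
}
```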
Feb 13 11:45:42.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:45:42.696: INFO: namespace: e2e-tests-projected-mbzt9, resource: bindings, ignored listing per whitelist
Feb 13 11:45:42.722: INFO: namespace e2e-tests-projected-mbzt9 deletion completed in 6.398244959s

• [SLOW TEST:22.564 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:45:42.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 13 11:46:05.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 13 11:46:05.363: INFO: Pod pod-with-prestop-http-hook still exists
Feb 13 11:46:07.364: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 13 11:46:07.380: INFO: Pod pod-with-prestop-http-hook still exists
Feb 13 11:46:09.364: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 13 11:46:09.373: INFO: Pod pod-with-prestop-http-hook still exists
Feb 13 11:46:11.364: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 13 11:46:11.376: INFO: Pod pod-with-prestop-http-hook still exists
Feb 13 11:46:13.364: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 13 11:46:13.386: INFO: Pod pod-with-prestop-http-hook still exists
Feb 13 11:46:15.364: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 13 11:46:15.393: INFO: Pod pod-with-prestop-http-hook still exists
Feb 13 11:46:17.364: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 13 11:46:17.420: INFO: Pod pod-with-prestop-http-hook still exists
Feb 13 11:46:19.364: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 13 11:46:19.416: INFO: Pod pod-with-prestop-http-hook still exists
Feb 13 11:46:21.364: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 13 11:46:21.389: INFO: Pod pod-with-prestop-http-hook still exists
Feb 13 11:46:23.364: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 13 11:46:23.391: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:46:23.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-csqb7" for this suite.
Feb 13 11:46:47.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:46:47.877: INFO: namespace: e2e-tests-container-lifecycle-hook-csqb7, resource: bindings, ignored listing per whitelist
Feb 13 11:46:47.942: INFO: namespace e2e-tests-container-lifecycle-hook-csqb7 deletion completed in 24.509131536s

• [SLOW TEST:65.220 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:46:47.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-84413438-4e56-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 13 11:46:48.160: INFO: Waiting up to 5m0s for pod "pod-configmaps-84457874-4e56-11ea-aba9-0242ac110007" in namespace "e2e-tests-configmap-smclb" to be "success or failure"
Feb 13 11:46:48.176: INFO: Pod "pod-configmaps-84457874-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 16.206954ms
Feb 13 11:46:50.208: INFO: Pod "pod-configmaps-84457874-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047410447s
Feb 13 11:46:52.236: INFO: Pod "pod-configmaps-84457874-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076191005s
Feb 13 11:46:54.490: INFO: Pod "pod-configmaps-84457874-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.329659407s
Feb 13 11:46:56.513: INFO: Pod "pod-configmaps-84457874-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.35311277s
Feb 13 11:46:58.536: INFO: Pod "pod-configmaps-84457874-4e56-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.376043007s
STEP: Saw pod success
Feb 13 11:46:58.536: INFO: Pod "pod-configmaps-84457874-4e56-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 11:46:58.551: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-84457874-4e56-11ea-aba9-0242ac110007 container configmap-volume-test:
STEP: delete the pod
Feb 13 11:46:58.775: INFO: Waiting for pod pod-configmaps-84457874-4e56-11ea-aba9-0242ac110007 to disappear
Feb 13 11:46:58.798: INFO: Pod pod-configmaps-84457874-4e56-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:46:58.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-smclb" for this suite.
Feb 13 11:47:04.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:47:04.978: INFO: namespace: e2e-tests-configmap-smclb, resource: bindings, ignored listing per whitelist
Feb 13 11:47:05.134: INFO: namespace e2e-tests-configmap-smclb deletion completed in 6.318353862s
• [SLOW TEST:17.191 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:47:05.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 13 11:47:05.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:47:15.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-rxnb5" for this suite.
Feb 13 11:47:59.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:47:59.693: INFO: namespace: e2e-tests-pods-rxnb5, resource: bindings, ignored listing per whitelist
Feb 13 11:47:59.723: INFO: namespace e2e-tests-pods-rxnb5 deletion completed in 44.195146634s
• [SLOW TEST:54.588 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:47:59.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-af05e780-4e56-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 13 11:47:59.907: INFO: Waiting up to 5m0s for pod "pod-secrets-af06c55f-4e56-11ea-aba9-0242ac110007" in namespace "e2e-tests-secrets-9tcrr" to be "success or failure"
Feb 13 11:47:59.916: INFO: Pod "pod-secrets-af06c55f-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.301133ms
Feb 13 11:48:02.251: INFO: Pod "pod-secrets-af06c55f-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.344135946s
Feb 13 11:48:04.275: INFO: Pod "pod-secrets-af06c55f-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.367657747s
Feb 13 11:48:06.292: INFO: Pod "pod-secrets-af06c55f-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.384469131s
Feb 13 11:48:08.310: INFO: Pod "pod-secrets-af06c55f-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.40227803s
Feb 13 11:48:10.326: INFO: Pod "pod-secrets-af06c55f-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.418713324s
Feb 13 11:48:12.340: INFO: Pod "pod-secrets-af06c55f-4e56-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.432639314s
STEP: Saw pod success
Feb 13 11:48:12.340: INFO: Pod "pod-secrets-af06c55f-4e56-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 11:48:12.346: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-af06c55f-4e56-11ea-aba9-0242ac110007 container secret-volume-test:
STEP: delete the pod
Feb 13 11:48:13.182: INFO: Waiting for pod pod-secrets-af06c55f-4e56-11ea-aba9-0242ac110007 to disappear
Feb 13 11:48:13.197: INFO: Pod pod-secrets-af06c55f-4e56-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:48:13.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-9tcrr" for this suite.
Feb 13 11:48:20.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:48:20.317: INFO: namespace: e2e-tests-secrets-9tcrr, resource: bindings, ignored listing per whitelist
Feb 13 11:48:20.355: INFO: namespace e2e-tests-secrets-9tcrr deletion completed in 6.908691558s
• [SLOW TEST:20.632 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:48:20.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:48:30.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-whdsn" for this suite.
Feb 13 11:49:12.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:49:12.907: INFO: namespace: e2e-tests-kubelet-test-whdsn, resource: bindings, ignored listing per whitelist
Feb 13 11:49:13.005: INFO: namespace e2e-tests-kubelet-test-whdsn deletion completed in 42.230654071s
• [SLOW TEST:52.650 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:49:13.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 13 11:49:13.282: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dac0dd1f-4e56-11ea-aba9-0242ac110007" in namespace "e2e-tests-downward-api-46ghn" to be "success or failure"
Feb 13 11:49:13.293: INFO: Pod "downwardapi-volume-dac0dd1f-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.881954ms
Feb 13 11:49:15.643: INFO: Pod "downwardapi-volume-dac0dd1f-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.360795252s
Feb 13 11:49:17.665: INFO: Pod "downwardapi-volume-dac0dd1f-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.382722212s
Feb 13 11:49:19.679: INFO: Pod "downwardapi-volume-dac0dd1f-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.396480212s
Feb 13 11:49:21.714: INFO: Pod "downwardapi-volume-dac0dd1f-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.432123653s
Feb 13 11:49:23.866: INFO: Pod "downwardapi-volume-dac0dd1f-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.584172417s
Feb 13 11:49:26.048: INFO: Pod "downwardapi-volume-dac0dd1f-4e56-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.766256934s
STEP: Saw pod success
Feb 13 11:49:26.049: INFO: Pod "downwardapi-volume-dac0dd1f-4e56-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 11:49:26.058: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-dac0dd1f-4e56-11ea-aba9-0242ac110007 container client-container:
STEP: delete the pod
Feb 13 11:49:26.131: INFO: Waiting for pod downwardapi-volume-dac0dd1f-4e56-11ea-aba9-0242ac110007 to disappear
Feb 13 11:49:26.221: INFO: Pod downwardapi-volume-dac0dd1f-4e56-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:49:26.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-46ghn" for this suite.
Feb 13 11:49:32.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:49:32.302: INFO: namespace: e2e-tests-downward-api-46ghn, resource: bindings, ignored listing per whitelist
Feb 13 11:49:32.371: INFO: namespace e2e-tests-downward-api-46ghn deletion completed in 6.140503739s
• [SLOW TEST:19.365 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:49:32.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 13 11:49:43.216: INFO: Successfully updated pod "labelsupdatee64565e1-4e56-11ea-aba9-0242ac110007"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:49:45.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-n2bj2" for this suite.
Feb 13 11:50:09.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:50:09.556: INFO: namespace: e2e-tests-downward-api-n2bj2, resource: bindings, ignored listing per whitelist
Feb 13 11:50:09.604: INFO: namespace e2e-tests-downward-api-n2bj2 deletion completed in 24.273717094s
• [SLOW TEST:37.232 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:50:09.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 13 11:50:09.824: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fc75a707-4e56-11ea-aba9-0242ac110007" in namespace "e2e-tests-downward-api-dv77q" to be "success or failure"
Feb 13 11:50:09.830: INFO: Pod "downwardapi-volume-fc75a707-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 5.938607ms
Feb 13 11:50:11.913: INFO: Pod "downwardapi-volume-fc75a707-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088694328s
Feb 13 11:50:13.927: INFO: Pod "downwardapi-volume-fc75a707-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102562622s
Feb 13 11:50:16.021: INFO: Pod "downwardapi-volume-fc75a707-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196037536s
Feb 13 11:50:18.093: INFO: Pod "downwardapi-volume-fc75a707-4e56-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.268407056s
Feb 13 11:50:20.126: INFO: Pod "downwardapi-volume-fc75a707-4e56-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.301510476s
STEP: Saw pod success
Feb 13 11:50:20.126: INFO: Pod "downwardapi-volume-fc75a707-4e56-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 11:50:20.138: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fc75a707-4e56-11ea-aba9-0242ac110007 container client-container:
STEP: delete the pod
Feb 13 11:50:20.260: INFO: Waiting for pod downwardapi-volume-fc75a707-4e56-11ea-aba9-0242ac110007 to disappear
Feb 13 11:50:20.268: INFO: Pod downwardapi-volume-fc75a707-4e56-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:50:20.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dv77q" for this suite.
Feb 13 11:50:26.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:50:26.594: INFO: namespace: e2e-tests-downward-api-dv77q, resource: bindings, ignored listing per whitelist
Feb 13 11:50:26.689: INFO: namespace e2e-tests-downward-api-dv77q deletion completed in 6.403448842s
• [SLOW TEST:17.085 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:50:26.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:50:37.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-s68h4" for this suite.
Feb 13 11:51:25.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:51:25.257: INFO: namespace: e2e-tests-kubelet-test-s68h4, resource: bindings, ignored listing per whitelist
Feb 13 11:51:25.286: INFO: namespace e2e-tests-kubelet-test-s68h4 deletion completed in 48.151136698s
• [SLOW TEST:58.596 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:51:25.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-hhl77
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-hhl77
STEP: Deleting pre-stop pod
Feb 13 11:51:46.751: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:51:46.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-hhl77" for this suite.
Feb 13 11:52:26.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:52:26.944: INFO: namespace: e2e-tests-prestop-hhl77, resource: bindings, ignored listing per whitelist
Feb 13 11:52:26.997: INFO: namespace e2e-tests-prestop-hhl77 deletion completed in 40.21974159s
• [SLOW TEST:61.710 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:52:26.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Feb 13 11:52:37.493: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:53:19.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-vnkd5" for this suite.
Feb 13 11:53:25.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:53:25.787: INFO: namespace: e2e-tests-namespaces-vnkd5, resource: bindings, ignored listing per whitelist
Feb 13 11:53:25.903: INFO: namespace e2e-tests-namespaces-vnkd5 deletion completed in 6.37225287s
STEP: Destroying namespace "e2e-tests-nsdeletetest-ww822" for this suite.
Feb 13 11:53:25.907: INFO: Namespace e2e-tests-nsdeletetest-ww822 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-k7r52" for this suite.
Feb 13 11:53:31.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:53:32.033: INFO: namespace: e2e-tests-nsdeletetest-k7r52, resource: bindings, ignored listing per whitelist
Feb 13 11:53:32.133: INFO: namespace e2e-tests-nsdeletetest-k7r52 deletion completed in 6.226039544s
• [SLOW TEST:65.136 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:53:32.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-vq9j
STEP: Creating a pod to test atomic-volume-subpath
Feb 13 11:53:32.510: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-vq9j" in namespace "e2e-tests-subpath-x6cj2" to be "success or failure"
Feb 13 11:53:32.562: INFO: Pod "pod-subpath-test-downwardapi-vq9j": Phase="Pending", Reason="", readiness=false. Elapsed: 52.446633ms
Feb 13 11:53:34.619: INFO: Pod "pod-subpath-test-downwardapi-vq9j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108847442s
Feb 13 11:53:36.631: INFO: Pod "pod-subpath-test-downwardapi-vq9j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121337921s
Feb 13 11:53:39.928: INFO: Pod "pod-subpath-test-downwardapi-vq9j": Phase="Pending", Reason="", readiness=false. Elapsed: 7.417809516s
Feb 13 11:53:42.000: INFO: Pod "pod-subpath-test-downwardapi-vq9j": Phase="Pending", Reason="", readiness=false. Elapsed: 9.489654964s
Feb 13 11:53:44.041: INFO: Pod "pod-subpath-test-downwardapi-vq9j": Phase="Pending", Reason="", readiness=false. Elapsed: 11.530921699s
Feb 13 11:53:46.131: INFO: Pod "pod-subpath-test-downwardapi-vq9j": Phase="Pending", Reason="", readiness=false. Elapsed: 13.62083997s
Feb 13 11:53:48.151: INFO: Pod "pod-subpath-test-downwardapi-vq9j": Phase="Pending", Reason="", readiness=false. Elapsed: 15.641098293s
Feb 13 11:53:50.183: INFO: Pod "pod-subpath-test-downwardapi-vq9j": Phase="Pending", Reason="", readiness=false. Elapsed: 17.67287349s
Feb 13 11:53:52.202: INFO: Pod "pod-subpath-test-downwardapi-vq9j": Phase="Running", Reason="", readiness=false. Elapsed: 19.6922563s
Feb 13 11:53:54.218: INFO: Pod "pod-subpath-test-downwardapi-vq9j": Phase="Running", Reason="", readiness=false. Elapsed: 21.708205198s
Feb 13 11:53:56.242: INFO: Pod "pod-subpath-test-downwardapi-vq9j": Phase="Running", Reason="", readiness=false. Elapsed: 23.731898333s
Feb 13 11:53:58.264: INFO: Pod "pod-subpath-test-downwardapi-vq9j": Phase="Running", Reason="", readiness=false. Elapsed: 25.75410076s
Feb 13 11:54:00.279: INFO: Pod "pod-subpath-test-downwardapi-vq9j": Phase="Running", Reason="", readiness=false. Elapsed: 27.768766841s
Feb 13 11:54:02.295: INFO: Pod "pod-subpath-test-downwardapi-vq9j": Phase="Running", Reason="", readiness=false. Elapsed: 29.784973661s
Feb 13 11:54:04.314: INFO: Pod "pod-subpath-test-downwardapi-vq9j": Phase="Running", Reason="", readiness=false. Elapsed: 31.804260803s
Feb 13 11:54:06.364: INFO: Pod "pod-subpath-test-downwardapi-vq9j": Phase="Running", Reason="", readiness=false. Elapsed: 33.853735582s
Feb 13 11:54:08.481: INFO: Pod "pod-subpath-test-downwardapi-vq9j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.971224782s
STEP: Saw pod success
Feb 13 11:54:08.481: INFO: Pod "pod-subpath-test-downwardapi-vq9j" satisfied condition "success or failure"
Feb 13 11:54:08.498: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-vq9j container test-container-subpath-downwardapi-vq9j:
STEP: delete the pod
Feb 13 11:54:08.661: INFO: Waiting for pod pod-subpath-test-downwardapi-vq9j to disappear
Feb 13 11:54:08.675: INFO: Pod pod-subpath-test-downwardapi-vq9j no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-vq9j
Feb 13 11:54:08.675: INFO: Deleting pod "pod-subpath-test-downwardapi-vq9j" in namespace "e2e-tests-subpath-x6cj2"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 11:54:08.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-x6cj2" for this suite.
Feb 13 11:54:14.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 11:54:14.894: INFO: namespace: e2e-tests-subpath-x6cj2, resource: bindings, ignored listing per whitelist
Feb 13 11:54:14.955: INFO: namespace e2e-tests-subpath-x6cj2 deletion completed in 6.261318327s
• [SLOW TEST:42.821 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 11:54:14.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-srn2j
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb 13 11:54:15.162: INFO: Found 0 stateful pods, waiting for 3
Feb 13 11:54:25.202: INFO: Found 1 stateful pods, waiting for 3
Feb 13 11:54:35.182: INFO: Found 2 stateful pods, waiting for 3
Feb 13 11:54:45.185: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 11:54:45.185: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 11:54:45.185: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 13 11:54:55.216: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 11:54:55.216: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 11:54:55.216: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 11:54:55.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-srn2j ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 13 11:54:56.022: INFO: stderr: "I0213 11:54:55.504511    1455 log.go:172] (0xc0007202c0) (0xc000748640) Create stream\nI0213 11:54:55.504952    1455 log.go:172] (0xc0007202c0) (0xc000748640) Stream added, broadcasting: 1\nI0213 11:54:55.523480    1455 log.go:172] (0xc0007202c0) Reply frame received for 1\nI0213 11:54:55.523594    1455 log.go:172] (0xc0007202c0) (0xc0007486e0) Create stream\nI0213 11:54:55.523616    1455 log.go:172] (0xc0007202c0) (0xc0007486e0) Stream added, broadcasting: 3\nI0213 11:54:55.526118    1455 log.go:172] (0xc0007202c0) Reply frame received for 3\nI0213 11:54:55.526163    1455 log.go:172] (0xc0007202c0) (0xc0006e0c80) Create stream\nI0213 11:54:55.526184    1455 log.go:172] (0xc0007202c0) (0xc0006e0c80) Stream added, broadcasting: 5\nI0213 11:54:55.527587    1455 log.go:172] (0xc0007202c0) Reply frame received for 5\nI0213 11:54:55.876037    1455 log.go:172] (0xc0007202c0) Data frame received for 3\nI0213 11:54:55.876225    1455 log.go:172] (0xc0007486e0) (3) Data frame handling\nI0213 11:54:55.876267    1455 log.go:172] (0xc0007486e0) (3) Data frame sent\nI0213 11:54:56.009518    1455 log.go:172] (0xc0007202c0) (0xc0006e0c80) Stream removed, broadcasting: 5\nI0213 11:54:56.009888    1455 log.go:172] (0xc0007202c0) Data frame received for 1\nI0213 11:54:56.009930    1455 log.go:172] (0xc0007202c0) (0xc0007486e0) Stream removed, broadcasting: 3\nI0213 11:54:56.010065    1455 log.go:172] (0xc000748640) (1) Data frame handling\nI0213 11:54:56.010117    1455 log.go:172] (0xc000748640) (1) Data frame sent\nI0213 11:54:56.010194    1455 log.go:172] (0xc0007202c0) (0xc000748640) Stream removed, broadcasting: 1\nI0213 11:54:56.010269    1455 log.go:172] (0xc0007202c0) Go away received\nI0213 11:54:56.010897    1455 log.go:172] (0xc0007202c0) (0xc000748640) Stream removed, broadcasting: 1\nI0213 11:54:56.010919    1455 log.go:172] (0xc0007202c0) (0xc0007486e0) Stream removed, broadcasting: 3\nI0213 11:54:56.010937    1455 log.go:172] (0xc0007202c0) (0xc0006e0c80) Stream removed, broadcasting: 5\n"
Feb 13 11:54:56.022: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 13 11:54:56.022: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 13 11:55:06.132: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 13 11:55:16.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-srn2j ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 11:55:16.805: INFO: stderr: "I0213 11:55:16.414805    1477 log.go:172] (0xc000154840) (0xc00059d360) Create stream\nI0213 11:55:16.415298    1477 log.go:172] (0xc000154840) (0xc00059d360) Stream added, broadcasting: 1\nI0213 11:55:16.432730    1477 log.go:172] (0xc000154840) Reply frame received for 1\nI0213 11:55:16.432806    1477 log.go:172] (0xc000154840) (0xc00074c000) Create stream\nI0213 11:55:16.432823    1477 log.go:172] (0xc000154840) (0xc00074c000) Stream added, broadcasting: 3\nI0213 11:55:16.435917    1477 log.go:172] (0xc000154840) Reply frame received for 3\nI0213 11:55:16.435961    1477 log.go:172] (0xc000154840) (0xc00059d400) Create stream\nI0213 11:55:16.435974    1477 log.go:172] (0xc000154840) (0xc00059d400) Stream added, broadcasting: 5\nI0213 11:55:16.437123    1477 log.go:172] (0xc000154840) Reply frame received for 5\nI0213 11:55:16.638155    1477 log.go:172] (0xc000154840) Data frame received for 3\nI0213 11:55:16.638264    1477 log.go:172] (0xc00074c000) (3) Data frame handling\nI0213 11:55:16.638292    1477 log.go:172] (0xc00074c000) (3) Data frame sent\nI0213 11:55:16.789567    1477 log.go:172] (0xc000154840) (0xc00074c000) Stream removed, broadcasting: 3\nI0213 11:55:16.789950    1477 log.go:172] (0xc000154840) Data frame received for 1\nI0213 11:55:16.790064    1477 log.go:172] (0xc00059d360) (1) Data frame handling\nI0213 11:55:16.790094    1477 log.go:172] (0xc00059d360) (1) Data frame sent\nI0213 11:55:16.790192    1477 log.go:172] (0xc000154840) (0xc00059d360) Stream removed, broadcasting: 1\nI0213 11:55:16.794269    1477 log.go:172] (0xc000154840) (0xc00059d400) Stream removed, broadcasting: 5\nI0213 11:55:16.794621    1477 log.go:172] (0xc000154840) (0xc00059d360) Stream removed, broadcasting: 1\nI0213 11:55:16.794737    1477 log.go:172] (0xc000154840) (0xc00074c000) Stream removed, broadcasting: 3\nI0213 11:55:16.794771    1477 log.go:172] (0xc000154840) (0xc00059d400) Stream removed, broadcasting: 5\nI0213 11:55:16.794822    1477 log.go:172] (0xc000154840) Go away received\n"
Feb 13 11:55:16.805: INFO: stdout: "'/tmp/index.html' ->
'/usr/share/nginx/html/index.html'\n" Feb 13 11:55:16.805: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 13 11:55:27.022: INFO: Waiting for StatefulSet e2e-tests-statefulset-srn2j/ss2 to complete update Feb 13 11:55:27.022: INFO: Waiting for Pod e2e-tests-statefulset-srn2j/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 13 11:55:27.022: INFO: Waiting for Pod e2e-tests-statefulset-srn2j/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 13 11:55:37.048: INFO: Waiting for StatefulSet e2e-tests-statefulset-srn2j/ss2 to complete update Feb 13 11:55:37.048: INFO: Waiting for Pod e2e-tests-statefulset-srn2j/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 13 11:55:37.048: INFO: Waiting for Pod e2e-tests-statefulset-srn2j/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 13 11:55:47.067: INFO: Waiting for StatefulSet e2e-tests-statefulset-srn2j/ss2 to complete update Feb 13 11:55:47.067: INFO: Waiting for Pod e2e-tests-statefulset-srn2j/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 13 11:55:57.042: INFO: Waiting for StatefulSet e2e-tests-statefulset-srn2j/ss2 to complete update Feb 13 11:55:57.042: INFO: Waiting for Pod e2e-tests-statefulset-srn2j/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 13 11:56:07.040: INFO: Waiting for StatefulSet e2e-tests-statefulset-srn2j/ss2 to complete update STEP: Rolling back to a previous revision Feb 13 11:56:17.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-srn2j ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 13 11:56:18.071: INFO: stderr: "I0213 11:56:17.311071 1499 log.go:172] (0xc000168840) (0xc000593400) Create stream\nI0213 11:56:17.311318 1499 log.go:172] (0xc000168840) (0xc000593400) Stream added, 
broadcasting: 1\nI0213 11:56:17.340311 1499 log.go:172] (0xc000168840) Reply frame received for 1\nI0213 11:56:17.340692 1499 log.go:172] (0xc000168840) (0xc0005d8000) Create stream\nI0213 11:56:17.340837 1499 log.go:172] (0xc000168840) (0xc0005d8000) Stream added, broadcasting: 3\nI0213 11:56:17.343045 1499 log.go:172] (0xc000168840) Reply frame received for 3\nI0213 11:56:17.343151 1499 log.go:172] (0xc000168840) (0xc000590000) Create stream\nI0213 11:56:17.343186 1499 log.go:172] (0xc000168840) (0xc000590000) Stream added, broadcasting: 5\nI0213 11:56:17.347502 1499 log.go:172] (0xc000168840) Reply frame received for 5\nI0213 11:56:17.903877 1499 log.go:172] (0xc000168840) Data frame received for 3\nI0213 11:56:17.904075 1499 log.go:172] (0xc0005d8000) (3) Data frame handling\nI0213 11:56:17.904109 1499 log.go:172] (0xc0005d8000) (3) Data frame sent\nI0213 11:56:18.058333 1499 log.go:172] (0xc000168840) Data frame received for 1\nI0213 11:56:18.058565 1499 log.go:172] (0xc000593400) (1) Data frame handling\nI0213 11:56:18.058623 1499 log.go:172] (0xc000593400) (1) Data frame sent\nI0213 11:56:18.058792 1499 log.go:172] (0xc000168840) (0xc000593400) Stream removed, broadcasting: 1\nI0213 11:56:18.059404 1499 log.go:172] (0xc000168840) (0xc0005d8000) Stream removed, broadcasting: 3\nI0213 11:56:18.060576 1499 log.go:172] (0xc000168840) (0xc000590000) Stream removed, broadcasting: 5\nI0213 11:56:18.060627 1499 log.go:172] (0xc000168840) Go away received\nI0213 11:56:18.060872 1499 log.go:172] (0xc000168840) (0xc000593400) Stream removed, broadcasting: 1\nI0213 11:56:18.061041 1499 log.go:172] (0xc000168840) (0xc0005d8000) Stream removed, broadcasting: 3\nI0213 11:56:18.061080 1499 log.go:172] (0xc000168840) (0xc000590000) Stream removed, broadcasting: 5\n" Feb 13 11:56:18.071: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 13 11:56:18.071: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: 
'/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 13 11:56:28.247: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Feb 13 11:56:38.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-srn2j ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 11:56:38.997: INFO: stderr: "I0213 11:56:38.652593 1521 log.go:172] (0xc00087a2c0) (0xc00071e640) Create stream\nI0213 11:56:38.652769 1521 log.go:172] (0xc00087a2c0) (0xc00071e640) Stream added, broadcasting: 1\nI0213 11:56:38.658675 1521 log.go:172] (0xc00087a2c0) Reply frame received for 1\nI0213 11:56:38.658820 1521 log.go:172] (0xc00087a2c0) (0xc0005badc0) Create stream\nI0213 11:56:38.658832 1521 log.go:172] (0xc00087a2c0) (0xc0005badc0) Stream added, broadcasting: 3\nI0213 11:56:38.660013 1521 log.go:172] (0xc00087a2c0) Reply frame received for 3\nI0213 11:56:38.660033 1521 log.go:172] (0xc00087a2c0) (0xc00071e6e0) Create stream\nI0213 11:56:38.660039 1521 log.go:172] (0xc00087a2c0) (0xc00071e6e0) Stream added, broadcasting: 5\nI0213 11:56:38.661062 1521 log.go:172] (0xc00087a2c0) Reply frame received for 5\nI0213 11:56:38.811546 1521 log.go:172] (0xc00087a2c0) Data frame received for 3\nI0213 11:56:38.811728 1521 log.go:172] (0xc0005badc0) (3) Data frame handling\nI0213 11:56:38.811766 1521 log.go:172] (0xc0005badc0) (3) Data frame sent\nI0213 11:56:38.988677 1521 log.go:172] (0xc00087a2c0) (0xc0005badc0) Stream removed, broadcasting: 3\nI0213 11:56:38.989013 1521 log.go:172] (0xc00087a2c0) Data frame received for 1\nI0213 11:56:38.989264 1521 log.go:172] (0xc00087a2c0) (0xc00071e6e0) Stream removed, broadcasting: 5\nI0213 11:56:38.989431 1521 log.go:172] (0xc00071e640) (1) Data frame handling\nI0213 11:56:38.989507 1521 log.go:172] (0xc00071e640) (1) Data frame sent\nI0213 11:56:38.989531 1521 log.go:172] (0xc00087a2c0) (0xc00071e640) Stream removed, broadcasting: 1\nI0213 
11:56:38.989550 1521 log.go:172] (0xc00087a2c0) Go away received\nI0213 11:56:38.990617 1521 log.go:172] (0xc00087a2c0) (0xc00071e640) Stream removed, broadcasting: 1\nI0213 11:56:38.990654 1521 log.go:172] (0xc00087a2c0) (0xc0005badc0) Stream removed, broadcasting: 3\nI0213 11:56:38.990664 1521 log.go:172] (0xc00087a2c0) (0xc00071e6e0) Stream removed, broadcasting: 5\n" Feb 13 11:56:38.997: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 13 11:56:38.997: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 13 11:56:49.105: INFO: Waiting for StatefulSet e2e-tests-statefulset-srn2j/ss2 to complete update Feb 13 11:56:49.105: INFO: Waiting for Pod e2e-tests-statefulset-srn2j/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 13 11:56:49.105: INFO: Waiting for Pod e2e-tests-statefulset-srn2j/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 13 11:56:49.105: INFO: Waiting for Pod e2e-tests-statefulset-srn2j/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 13 11:56:59.131: INFO: Waiting for StatefulSet e2e-tests-statefulset-srn2j/ss2 to complete update Feb 13 11:56:59.131: INFO: Waiting for Pod e2e-tests-statefulset-srn2j/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 13 11:56:59.131: INFO: Waiting for Pod e2e-tests-statefulset-srn2j/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 13 11:57:09.136: INFO: Waiting for StatefulSet e2e-tests-statefulset-srn2j/ss2 to complete update Feb 13 11:57:09.136: INFO: Waiting for Pod e2e-tests-statefulset-srn2j/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 13 11:57:09.136: INFO: Waiting for Pod e2e-tests-statefulset-srn2j/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 13 11:57:20.522: INFO: Waiting for StatefulSet e2e-tests-statefulset-srn2j/ss2 to 
complete update Feb 13 11:57:20.522: INFO: Waiting for Pod e2e-tests-statefulset-srn2j/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 13 11:57:29.278: INFO: Waiting for StatefulSet e2e-tests-statefulset-srn2j/ss2 to complete update Feb 13 11:57:29.278: INFO: Waiting for Pod e2e-tests-statefulset-srn2j/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 13 11:57:39.124: INFO: Waiting for StatefulSet e2e-tests-statefulset-srn2j/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 13 11:57:49.145: INFO: Deleting all statefulset in ns e2e-tests-statefulset-srn2j Feb 13 11:57:49.153: INFO: Scaling statefulset ss2 to 0 Feb 13 11:58:29.262: INFO: Waiting for statefulset status.replicas updated to 0 Feb 13 11:58:29.270: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:58:29.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-srn2j" for this suite. 
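The rolling update and rollback above drive the pods of StatefulSet `ss2` through two controller revisions (`ss2-6c5cd755cd` and `ss2-7c9b54fd4c` in the log), updating in reverse ordinal order and waiting for each pod's revision label to match before proceeding. A minimal sketch of the kind of spec involved follows; only the name, replica count, and images appear in the log above, while the labels, service wiring, and everything else are assumptions for illustration:

```yaml
# Illustrative sketch only -- not the test's actual spec.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test            # the headless service created in the test setup
  replicas: 3                  # pods ss2-0, ss2-1, ss2-2
  selector:
    matchLabels:
      app: ss2                 # label wiring is assumed
  updateStrategy:
    type: RollingUpdate        # pods are updated in reverse ordinal order
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine  # updated to 1.15-alpine, then rolled back
```

Each edit to `.spec.template` produces a new ControllerRevision; the rollback in this test is simply a second template update back to the original image, which the controller applies the same way.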
Feb 13 11:58:37.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:58:37.621: INFO: namespace: e2e-tests-statefulset-srn2j, resource: bindings, ignored listing per whitelist Feb 13 11:58:37.649: INFO: namespace e2e-tests-statefulset-srn2j deletion completed in 8.269243105s • [SLOW TEST:262.694 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:58:37.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-mf25p in namespace e2e-tests-proxy-2nd55 I0213 11:58:38.115965 8 runners.go:184] Created replication controller with name: proxy-service-mf25p, namespace: e2e-tests-proxy-2nd55, replica count: 1 I0213 11:58:39.166832 8 runners.go:184] proxy-service-mf25p Pods: 1 out of 1 created, 0 running, 1 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 11:58:40.167108 8 runners.go:184] proxy-service-mf25p Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 11:58:41.167394 8 runners.go:184] proxy-service-mf25p Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 11:58:42.168247 8 runners.go:184] proxy-service-mf25p Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 11:58:43.168607 8 runners.go:184] proxy-service-mf25p Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 11:58:44.168897 8 runners.go:184] proxy-service-mf25p Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 11:58:45.171688 8 runners.go:184] proxy-service-mf25p Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 11:58:46.172016 8 runners.go:184] proxy-service-mf25p Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 11:58:47.172351 8 runners.go:184] proxy-service-mf25p Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 11:58:48.172655 8 runners.go:184] proxy-service-mf25p Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0213 11:58:49.173498 8 runners.go:184] proxy-service-mf25p Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0213 11:58:50.174104 8 runners.go:184] proxy-service-mf25p Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 1 runningButNotReady I0213 11:58:51.174473 8 runners.go:184] proxy-service-mf25p Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 13 11:58:51.191: INFO: setup took 13.162905186s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Feb 13 11:58:51.227: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-2nd55/pods/proxy-service-mf25p-flqg5/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 13 11:59:09.048: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ddfbc15-4e58-11ea-aba9-0242ac110007" in namespace "e2e-tests-downward-api-dxkqc" to be "success or failure" Feb 13 11:59:09.081: INFO: Pod "downwardapi-volume-3ddfbc15-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 32.993593ms Feb 13 11:59:11.097: INFO: Pod "downwardapi-volume-3ddfbc15-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049205448s Feb 13 11:59:13.110: INFO: Pod "downwardapi-volume-3ddfbc15-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062264451s Feb 13 11:59:15.450: INFO: Pod "downwardapi-volume-3ddfbc15-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.401958553s Feb 13 11:59:18.240: INFO: Pod "downwardapi-volume-3ddfbc15-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.191900736s Feb 13 11:59:20.881: INFO: Pod "downwardapi-volume-3ddfbc15-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.832979496s Feb 13 11:59:22.925: INFO: Pod "downwardapi-volume-3ddfbc15-4e58-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.87635896s STEP: Saw pod success Feb 13 11:59:22.925: INFO: Pod "downwardapi-volume-3ddfbc15-4e58-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 11:59:22.943: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3ddfbc15-4e58-11ea-aba9-0242ac110007 container client-container: STEP: delete the pod Feb 13 11:59:23.473: INFO: Waiting for pod downwardapi-volume-3ddfbc15-4e58-11ea-aba9-0242ac110007 to disappear Feb 13 11:59:23.498: INFO: Pod downwardapi-volume-3ddfbc15-4e58-11ea-aba9-0242ac110007 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:59:23.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-dxkqc" for this suite. 
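The downward-API volume test above verifies that when a container sets no memory limit, the value exposed through `limits.memory` defaults to the node's allocatable memory. A hedged sketch of such a pod follows; the container name `client-container` comes from the log, while the image, command, and file path are assumed:

```yaml
# Illustrative sketch, assuming a busybox image and file layout.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # note: no resources.limits set, so limits.memory resolves
    # to the node's allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```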
Feb 13 11:59:29.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:59:29.857: INFO: namespace: e2e-tests-downward-api-dxkqc, resource: bindings, ignored listing per whitelist Feb 13 11:59:29.884: INFO: namespace e2e-tests-downward-api-dxkqc deletion completed in 6.376730034s • [SLOW TEST:21.053 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:59:29.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-4a75c41c-4e58-11ea-aba9-0242ac110007 STEP: Creating a pod to test consume configMaps Feb 13 11:59:30.261: INFO: Waiting up to 5m0s for pod "pod-configmaps-4a76e3f4-4e58-11ea-aba9-0242ac110007" in namespace "e2e-tests-configmap-6jdlx" to be "success or failure" Feb 13 11:59:30.280: INFO: Pod "pod-configmaps-4a76e3f4-4e58-11ea-aba9-0242ac110007": 
Phase="Pending", Reason="", readiness=false. Elapsed: 19.371921ms Feb 13 11:59:32.299: INFO: Pod "pod-configmaps-4a76e3f4-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037548076s Feb 13 11:59:34.310: INFO: Pod "pod-configmaps-4a76e3f4-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049009709s Feb 13 11:59:36.334: INFO: Pod "pod-configmaps-4a76e3f4-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073032946s Feb 13 11:59:38.916: INFO: Pod "pod-configmaps-4a76e3f4-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.655119039s Feb 13 11:59:40.940: INFO: Pod "pod-configmaps-4a76e3f4-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.67864354s Feb 13 11:59:42.962: INFO: Pod "pod-configmaps-4a76e3f4-4e58-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.700521673s STEP: Saw pod success Feb 13 11:59:42.962: INFO: Pod "pod-configmaps-4a76e3f4-4e58-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 11:59:42.970: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-4a76e3f4-4e58-11ea-aba9-0242ac110007 container configmap-volume-test: STEP: delete the pod Feb 13 11:59:43.246: INFO: Waiting for pod pod-configmaps-4a76e3f4-4e58-11ea-aba9-0242ac110007 to disappear Feb 13 11:59:43.385: INFO: Pod pod-configmaps-4a76e3f4-4e58-11ea-aba9-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 11:59:43.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-6jdlx" for this suite. 
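The ConfigMap test above mounts a ConfigMap as a volume with both a key-to-path mapping and a per-item file mode. A sketch of that shape follows; the container name `configmap-volume-test` and the ConfigMap name prefix come from the log, while the key, path, mode value, image, and command are illustrative assumptions:

```yaml
# Illustrative sketch only; key names and mode are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-example
spec:
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-2            # ConfigMap key
        path: path/to/data-2   # file path under the mount point
        mode: 0400             # per-item file mode being verified
```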
Feb 13 11:59:49.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 11:59:49.479: INFO: namespace: e2e-tests-configmap-6jdlx, resource: bindings, ignored listing per whitelist Feb 13 11:59:49.693: INFO: namespace e2e-tests-configmap-6jdlx deletion completed in 6.28043131s • [SLOW TEST:19.809 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 11:59:49.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-qpr4 STEP: Creating a pod to test atomic-volume-subpath Feb 13 11:59:50.212: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qpr4" in namespace "e2e-tests-subpath-wq4tk" to be "success or failure" Feb 13 11:59:50.361: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 148.723971ms Feb 13 11:59:52.476: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.264336566s Feb 13 11:59:54.521: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309435194s Feb 13 11:59:56.557: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.345388187s Feb 13 11:59:58.588: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.376176413s Feb 13 12:00:01.104: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.892451929s Feb 13 12:00:04.762: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.549806225s Feb 13 12:00:06.780: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.567698498s Feb 13 12:00:08.798: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.5864065s Feb 13 12:00:10.815: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Running", Reason="", readiness=false. Elapsed: 20.603221754s Feb 13 12:00:12.834: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Running", Reason="", readiness=false. Elapsed: 22.622267496s Feb 13 12:00:14.853: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Running", Reason="", readiness=false. Elapsed: 24.640935558s Feb 13 12:00:16.877: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Running", Reason="", readiness=false. Elapsed: 26.665287189s Feb 13 12:00:18.898: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Running", Reason="", readiness=false. Elapsed: 28.685691182s Feb 13 12:00:20.916: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Running", Reason="", readiness=false. 
Elapsed: 30.703894084s Feb 13 12:00:22.932: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Running", Reason="", readiness=false. Elapsed: 32.719948777s Feb 13 12:00:24.955: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Running", Reason="", readiness=false. Elapsed: 34.742567972s Feb 13 12:00:27.009: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Running", Reason="", readiness=false. Elapsed: 36.797220936s Feb 13 12:00:29.043: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Running", Reason="", readiness=false. Elapsed: 38.830855183s Feb 13 12:00:31.549: INFO: Pod "pod-subpath-test-configmap-qpr4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 41.337158583s STEP: Saw pod success Feb 13 12:00:31.549: INFO: Pod "pod-subpath-test-configmap-qpr4" satisfied condition "success or failure" Feb 13 12:00:31.564: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-qpr4 container test-container-subpath-configmap-qpr4: STEP: delete the pod Feb 13 12:00:32.016: INFO: Waiting for pod pod-subpath-test-configmap-qpr4 to disappear Feb 13 12:00:32.049: INFO: Pod pod-subpath-test-configmap-qpr4 no longer exists STEP: Deleting pod pod-subpath-test-configmap-qpr4 Feb 13 12:00:32.050: INFO: Deleting pod "pod-subpath-test-configmap-qpr4" in namespace "e2e-tests-subpath-wq4tk" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 12:00:32.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-wq4tk" for this suite. 
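The subpath test above mounts a path *within* a ConfigMap-backed volume via `subPath`, exercising the atomic-writer update path. A rough sketch of that mount shape follows; the pod and container name prefixes come from the log, and the ConfigMap name, key, and paths are assumptions:

```yaml
# Illustrative sketch; the ConfigMap name, key, and subPath are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  containers:
  - name: test-container-subpath-configmap
    image: busybox
    command: ["sh", "-c", "cat /test-volume/data-1"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
      subPath: path/in/volume  # mounts only this path of the volume
  volumes:
  - name: test-volume
    configMap:
      name: my-configmap       # hypothetical name
```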
Feb 13 12:00:38.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 12:00:38.272: INFO: namespace: e2e-tests-subpath-wq4tk, resource: bindings, ignored listing per whitelist Feb 13 12:00:38.437: INFO: namespace e2e-tests-subpath-wq4tk deletion completed in 6.361594994s • [SLOW TEST:48.744 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 12:00:38.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-7350ec3a-4e58-11ea-aba9-0242ac110007 STEP: Creating a pod to test consume configMaps Feb 13 12:00:38.718: INFO: Waiting up to 5m0s for pod "pod-configmaps-73521ffb-4e58-11ea-aba9-0242ac110007" in namespace "e2e-tests-configmap-4zxlp" to be "success or failure" Feb 13 12:00:38.738: INFO: Pod 
"pod-configmaps-73521ffb-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 20.131694ms Feb 13 12:00:41.069: INFO: Pod "pod-configmaps-73521ffb-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.350846791s Feb 13 12:00:43.093: INFO: Pod "pod-configmaps-73521ffb-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.37497654s Feb 13 12:00:45.116: INFO: Pod "pod-configmaps-73521ffb-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.398309541s Feb 13 12:00:47.131: INFO: Pod "pod-configmaps-73521ffb-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.413291552s Feb 13 12:00:49.155: INFO: Pod "pod-configmaps-73521ffb-4e58-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.437359601s STEP: Saw pod success Feb 13 12:00:49.155: INFO: Pod "pod-configmaps-73521ffb-4e58-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 12:00:49.168: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-73521ffb-4e58-11ea-aba9-0242ac110007 container configmap-volume-test: STEP: delete the pod Feb 13 12:00:49.266: INFO: Waiting for pod pod-configmaps-73521ffb-4e58-11ea-aba9-0242ac110007 to disappear Feb 13 12:00:49.287: INFO: Pod pod-configmaps-73521ffb-4e58-11ea-aba9-0242ac110007 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 12:00:49.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-4zxlp" for this suite. 
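This variant of the ConfigMap volume test consumes the mapped keys while running as a non-root user. The sketch below assumes the standard way to express that, a pod-level `securityContext`; the user ID, image, key, and paths are illustrative, with only the container name and ConfigMap name prefix taken from the log:

```yaml
# Illustrative sketch; runAsUser/fsGroup values and keys are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-example
spec:
  securityContext:
    runAsUser: 1000            # run the container as a non-root UID
    fsGroup: 1000              # make mounted files group-readable by that user
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/mapped-key"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1            # hypothetical key
        path: mapped-key
```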
Feb 13 12:00:55.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:00:55.674: INFO: namespace: e2e-tests-configmap-4zxlp, resource: bindings, ignored listing per whitelist
Feb 13 12:00:55.688: INFO: namespace e2e-tests-configmap-4zxlp deletion completed in 6.340793971s

• [SLOW TEST:17.250 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:00:55.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 13 12:00:55.920: INFO: Waiting up to 5m0s for pod "downward-api-7d8ce05f-4e58-11ea-aba9-0242ac110007" in namespace "e2e-tests-downward-api-88kxd" to be "success or failure"
Feb 13 12:00:55.935: INFO: Pod "downward-api-7d8ce05f-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 14.164058ms
Feb 13 12:00:57.964: INFO: Pod "downward-api-7d8ce05f-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043112684s
Feb 13 12:00:59.976: INFO: Pod "downward-api-7d8ce05f-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055663192s
Feb 13 12:01:02.032: INFO: Pod "downward-api-7d8ce05f-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111679577s
Feb 13 12:01:04.068: INFO: Pod "downward-api-7d8ce05f-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147976384s
Feb 13 12:01:06.122: INFO: Pod "downward-api-7d8ce05f-4e58-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.201923508s
STEP: Saw pod success
Feb 13 12:01:06.123: INFO: Pod "downward-api-7d8ce05f-4e58-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:01:06.169: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-7d8ce05f-4e58-11ea-aba9-0242ac110007 container dapi-container:
STEP: delete the pod
Feb 13 12:01:06.483: INFO: Waiting for pod downward-api-7d8ce05f-4e58-11ea-aba9-0242ac110007 to disappear
Feb 13 12:01:06.634: INFO: Pod downward-api-7d8ce05f-4e58-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:01:06.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-88kxd" for this suite.
Feb 13 12:01:12.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:01:12.932: INFO: namespace: e2e-tests-downward-api-88kxd, resource: bindings, ignored listing per whitelist
Feb 13 12:01:12.967: INFO: namespace e2e-tests-downward-api-88kxd deletion completed in 6.309310809s

• [SLOW TEST:17.278 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:01:12.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-87da8b9a-4e58-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 13 12:01:13.228: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-87db79c6-4e58-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-bwflh" to be "success or failure"
Feb 13 12:01:13.239: INFO: Pod "pod-projected-configmaps-87db79c6-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.646094ms
Feb 13 12:01:15.252: INFO: Pod "pod-projected-configmaps-87db79c6-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024200057s
Feb 13 12:01:17.261: INFO: Pod "pod-projected-configmaps-87db79c6-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032919339s
Feb 13 12:01:19.775: INFO: Pod "pod-projected-configmaps-87db79c6-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.547587577s
Feb 13 12:01:22.024: INFO: Pod "pod-projected-configmaps-87db79c6-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.796722843s
Feb 13 12:01:24.033: INFO: Pod "pod-projected-configmaps-87db79c6-4e58-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.805495455s
STEP: Saw pod success
Feb 13 12:01:24.033: INFO: Pod "pod-projected-configmaps-87db79c6-4e58-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:01:24.038: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-87db79c6-4e58-11ea-aba9-0242ac110007 container projected-configmap-volume-test:
STEP: delete the pod
Feb 13 12:01:24.832: INFO: Waiting for pod pod-projected-configmaps-87db79c6-4e58-11ea-aba9-0242ac110007 to disappear
Feb 13 12:01:24.858: INFO: Pod pod-projected-configmaps-87db79c6-4e58-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:01:24.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bwflh" for this suite.
Feb 13 12:01:30.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:01:31.155: INFO: namespace: e2e-tests-projected-bwflh, resource: bindings, ignored listing per whitelist
Feb 13 12:01:31.158: INFO: namespace e2e-tests-projected-bwflh deletion completed in 6.279432119s

• [SLOW TEST:18.191 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:01:31.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-92ab4cc1-4e58-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 13 12:01:31.381: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-92ac2fa8-4e58-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-8fnhp" to be "success or failure"
Feb 13 12:01:31.391: INFO: Pod "pod-projected-secrets-92ac2fa8-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.706004ms
Feb 13 12:01:33.732: INFO: Pod "pod-projected-secrets-92ac2fa8-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.351011652s
Feb 13 12:01:35.749: INFO: Pod "pod-projected-secrets-92ac2fa8-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.367914449s
Feb 13 12:01:38.135: INFO: Pod "pod-projected-secrets-92ac2fa8-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.753805541s
Feb 13 12:01:40.159: INFO: Pod "pod-projected-secrets-92ac2fa8-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.77804627s
Feb 13 12:01:42.173: INFO: Pod "pod-projected-secrets-92ac2fa8-4e58-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.791261232s
STEP: Saw pod success
Feb 13 12:01:42.173: INFO: Pod "pod-projected-secrets-92ac2fa8-4e58-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:01:42.180: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-92ac2fa8-4e58-11ea-aba9-0242ac110007 container projected-secret-volume-test:
STEP: delete the pod
Feb 13 12:01:42.889: INFO: Waiting for pod pod-projected-secrets-92ac2fa8-4e58-11ea-aba9-0242ac110007 to disappear
Feb 13 12:01:42.964: INFO: Pod pod-projected-secrets-92ac2fa8-4e58-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:01:42.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8fnhp" for this suite.
Feb 13 12:01:49.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:01:49.804: INFO: namespace: e2e-tests-projected-8fnhp, resource: bindings, ignored listing per whitelist
Feb 13 12:01:49.813: INFO: namespace e2e-tests-projected-8fnhp deletion completed in 6.840833206s

• [SLOW TEST:18.655 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:01:49.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 13 12:01:50.069: INFO: Waiting up to 5m0s for pod "pod-9dd924ef-4e58-11ea-aba9-0242ac110007" in namespace "e2e-tests-emptydir-vrln4" to be "success or failure"
Feb 13 12:01:50.088: INFO: Pod "pod-9dd924ef-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 18.58361ms
Feb 13 12:01:52.102: INFO: Pod "pod-9dd924ef-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032763813s
Feb 13 12:01:54.159: INFO: Pod "pod-9dd924ef-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089335907s
Feb 13 12:01:56.585: INFO: Pod "pod-9dd924ef-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.515316019s
Feb 13 12:01:58.601: INFO: Pod "pod-9dd924ef-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.532018856s
Feb 13 12:02:00.632: INFO: Pod "pod-9dd924ef-4e58-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.562454382s
STEP: Saw pod success
Feb 13 12:02:00.632: INFO: Pod "pod-9dd924ef-4e58-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:02:00.640: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9dd924ef-4e58-11ea-aba9-0242ac110007 container test-container:
STEP: delete the pod
Feb 13 12:02:00.860: INFO: Waiting for pod pod-9dd924ef-4e58-11ea-aba9-0242ac110007 to disappear
Feb 13 12:02:00.882: INFO: Pod pod-9dd924ef-4e58-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:02:00.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-vrln4" for this suite.
Feb 13 12:02:06.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:02:07.045: INFO: namespace: e2e-tests-emptydir-vrln4, resource: bindings, ignored listing per whitelist
Feb 13 12:02:07.080: INFO: namespace e2e-tests-emptydir-vrln4 deletion completed in 6.188991613s

• [SLOW TEST:17.267 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Pods
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:02:07.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Feb 13 12:02:17.359: INFO: Pod pod-hostip-a815acfe-4e58-11ea-aba9-0242ac110007 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:02:17.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-mn8tx" for this suite.
Feb 13 12:02:41.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:02:41.730: INFO: namespace: e2e-tests-pods-mn8tx, resource: bindings, ignored listing per whitelist
Feb 13 12:02:41.780: INFO: namespace e2e-tests-pods-mn8tx deletion completed in 24.412890249s

• [SLOW TEST:34.700 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:02:41.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 13 12:02:42.008: INFO: Waiting up to 5m0s for pod "downward-api-bccc9b64-4e58-11ea-aba9-0242ac110007" in namespace "e2e-tests-downward-api-c62qk" to be "success or failure"
Feb 13 12:02:42.024: INFO: Pod "downward-api-bccc9b64-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.97821ms
Feb 13 12:02:44.044: INFO: Pod "downward-api-bccc9b64-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035543296s
Feb 13 12:02:46.059: INFO: Pod "downward-api-bccc9b64-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05063912s
Feb 13 12:02:48.267: INFO: Pod "downward-api-bccc9b64-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.25884894s
Feb 13 12:02:50.636: INFO: Pod "downward-api-bccc9b64-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.627615016s
Feb 13 12:02:52.680: INFO: Pod "downward-api-bccc9b64-4e58-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.671421575s
STEP: Saw pod success
Feb 13 12:02:52.680: INFO: Pod "downward-api-bccc9b64-4e58-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:02:52.685: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-bccc9b64-4e58-11ea-aba9-0242ac110007 container dapi-container:
STEP: delete the pod
Feb 13 12:02:52.900: INFO: Waiting for pod downward-api-bccc9b64-4e58-11ea-aba9-0242ac110007 to disappear
Feb 13 12:02:52.913: INFO: Pod downward-api-bccc9b64-4e58-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:02:52.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-c62qk" for this suite.
Feb 13 12:02:58.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:02:59.084: INFO: namespace: e2e-tests-downward-api-c62qk, resource: bindings, ignored listing per whitelist
Feb 13 12:02:59.201: INFO: namespace e2e-tests-downward-api-c62qk deletion completed in 6.279624237s

• [SLOW TEST:17.419 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:02:59.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 13 12:02:59.662: INFO: Waiting up to 5m0s for pod "pod-c73cf552-4e58-11ea-aba9-0242ac110007" in namespace "e2e-tests-emptydir-jzsms" to be "success or failure"
Feb 13 12:02:59.695: INFO: Pod "pod-c73cf552-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 33.029198ms
Feb 13 12:03:01.721: INFO: Pod "pod-c73cf552-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059054345s
Feb 13 12:03:03.746: INFO: Pod "pod-c73cf552-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083370835s
Feb 13 12:03:07.251: INFO: Pod "pod-c73cf552-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.588633576s
Feb 13 12:03:09.275: INFO: Pod "pod-c73cf552-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.61240938s
Feb 13 12:03:11.295: INFO: Pod "pod-c73cf552-4e58-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.633062703s
STEP: Saw pod success
Feb 13 12:03:11.295: INFO: Pod "pod-c73cf552-4e58-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:03:11.302: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c73cf552-4e58-11ea-aba9-0242ac110007 container test-container:
STEP: delete the pod
Feb 13 12:03:11.639: INFO: Waiting for pod pod-c73cf552-4e58-11ea-aba9-0242ac110007 to disappear
Feb 13 12:03:11.686: INFO: Pod pod-c73cf552-4e58-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:03:11.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jzsms" for this suite.
Feb 13 12:03:20.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:03:20.672: INFO: namespace: e2e-tests-emptydir-jzsms, resource: bindings, ignored listing per whitelist
Feb 13 12:03:20.766: INFO: namespace e2e-tests-emptydir-jzsms deletion completed in 9.069623417s

• [SLOW TEST:21.563 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:03:20.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 13 12:03:21.117: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 13 12:03:21.133: INFO: Waiting for terminating namespaces to be deleted...
Feb 13 12:03:21.138: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb 13 12:03:21.155: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at <nil> (0 container statuses recorded)
Feb 13 12:03:21.155: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 13 12:03:21.155: INFO: 	Container weave ready: true, restart count 0
Feb 13 12:03:21.155: INFO: 	Container weave-npc ready: true, restart count 0
Feb 13 12:03:21.155: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 13 12:03:21.155: INFO: 	Container coredns ready: true, restart count 0
Feb 13 12:03:21.155: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at <nil> (0 container statuses recorded)
Feb 13 12:03:21.155: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at <nil> (0 container statuses recorded)
Feb 13 12:03:21.155: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at <nil> (0 container statuses recorded)
Feb 13 12:03:21.155: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 13 12:03:21.155: INFO: 	Container coredns ready: true, restart count 0
Feb 13 12:03:21.155: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 13 12:03:21.155: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f2f4a342ee4948], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:03:22.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-hh7k8" for this suite.
Feb 13 12:03:28.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:03:28.604: INFO: namespace: e2e-tests-sched-pred-hh7k8, resource: bindings, ignored listing per whitelist
Feb 13 12:03:28.629: INFO: namespace e2e-tests-sched-pred-hh7k8 deletion completed in 6.40646923s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.863 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:03:28.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-d8bf9d2b-4e58-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 13 12:03:28.899: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d8c121bc-4e58-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-hn2wk" to be "success or failure"
Feb 13 12:03:28.913: INFO: Pod "pod-projected-secrets-d8c121bc-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 14.167984ms
Feb 13 12:03:30.999: INFO: Pod "pod-projected-secrets-d8c121bc-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099938666s
Feb 13 12:03:33.027: INFO: Pod "pod-projected-secrets-d8c121bc-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128058606s
Feb 13 12:03:36.006: INFO: Pod "pod-projected-secrets-d8c121bc-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.107324431s
Feb 13 12:03:38.021: INFO: Pod "pod-projected-secrets-d8c121bc-4e58-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.121705684s
Feb 13 12:03:40.029: INFO: Pod "pod-projected-secrets-d8c121bc-4e58-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.130578954s
STEP: Saw pod success
Feb 13 12:03:40.030: INFO: Pod "pod-projected-secrets-d8c121bc-4e58-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:03:40.033: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-d8c121bc-4e58-11ea-aba9-0242ac110007 container projected-secret-volume-test:
STEP: delete the pod
Feb 13 12:03:40.150: INFO: Waiting for pod pod-projected-secrets-d8c121bc-4e58-11ea-aba9-0242ac110007 to disappear
Feb 13 12:03:40.247: INFO: Pod pod-projected-secrets-d8c121bc-4e58-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:03:40.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hn2wk" for this suite.
Feb 13 12:03:46.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:03:46.429: INFO: namespace: e2e-tests-projected-hn2wk, resource: bindings, ignored listing per whitelist
Feb 13 12:03:46.591: INFO: namespace e2e-tests-projected-hn2wk deletion completed in 6.331173253s

• [SLOW TEST:17.961 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:03:46.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:03:59.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-nm7mj" for this suite.
Feb 13 12:04:24.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:04:24.173: INFO: namespace: e2e-tests-replication-controller-nm7mj, resource: bindings, ignored listing per whitelist
Feb 13 12:04:24.218: INFO: namespace e2e-tests-replication-controller-nm7mj deletion completed in 24.236735493s

• [SLOW TEST:37.626 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:04:24.219: INFO: >>>
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Feb 13 12:04:24.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-n4gqz' Feb 13 12:04:28.009: INFO: stderr: "" Feb 13 12:04:28.009: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 13 12:04:28.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n4gqz' Feb 13 12:04:28.343: INFO: stderr: "" Feb 13 12:04:28.343: INFO: stdout: "update-demo-nautilus-bdw4f update-demo-nautilus-pp8cl " Feb 13 12:04:28.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bdw4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4gqz' Feb 13 12:04:28.476: INFO: stderr: "" Feb 13 12:04:28.476: INFO: stdout: "" Feb 13 12:04:28.476: INFO: update-demo-nautilus-bdw4f is created but not running Feb 13 12:04:33.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n4gqz' Feb 13 12:04:33.648: INFO: stderr: "" Feb 13 12:04:33.648: INFO: stdout: "update-demo-nautilus-bdw4f update-demo-nautilus-pp8cl " Feb 13 12:04:33.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bdw4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4gqz' Feb 13 12:04:33.816: INFO: stderr: "" Feb 13 12:04:33.816: INFO: stdout: "" Feb 13 12:04:33.816: INFO: update-demo-nautilus-bdw4f is created but not running Feb 13 12:04:38.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n4gqz' Feb 13 12:04:39.028: INFO: stderr: "" Feb 13 12:04:39.028: INFO: stdout: "update-demo-nautilus-bdw4f update-demo-nautilus-pp8cl " Feb 13 12:04:39.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bdw4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4gqz' Feb 13 12:04:39.266: INFO: stderr: "" Feb 13 12:04:39.266: INFO: stdout: "" Feb 13 12:04:39.266: INFO: update-demo-nautilus-bdw4f is created but not running Feb 13 12:04:44.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-n4gqz' Feb 13 12:04:44.412: INFO: stderr: "" Feb 13 12:04:44.412: INFO: stdout: "update-demo-nautilus-bdw4f update-demo-nautilus-pp8cl " Feb 13 12:04:44.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bdw4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4gqz' Feb 13 12:04:44.564: INFO: stderr: "" Feb 13 12:04:44.564: INFO: stdout: "true" Feb 13 12:04:44.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bdw4f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4gqz' Feb 13 12:04:44.726: INFO: stderr: "" Feb 13 12:04:44.726: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 13 12:04:44.726: INFO: validating pod update-demo-nautilus-bdw4f Feb 13 12:04:44.791: INFO: got data: { "image": "nautilus.jpg" } Feb 13 12:04:44.791: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 13 12:04:44.792: INFO: update-demo-nautilus-bdw4f is verified up and running Feb 13 12:04:44.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pp8cl -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4gqz' Feb 13 12:04:44.902: INFO: stderr: "" Feb 13 12:04:44.902: INFO: stdout: "true" Feb 13 12:04:44.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pp8cl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-n4gqz' Feb 13 12:04:45.085: INFO: stderr: "" Feb 13 12:04:45.085: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 13 12:04:45.085: INFO: validating pod update-demo-nautilus-pp8cl Feb 13 12:04:45.120: INFO: got data: { "image": "nautilus.jpg" } Feb 13 12:04:45.120: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 13 12:04:45.120: INFO: update-demo-nautilus-pp8cl is verified up and running STEP: using delete to clean up resources Feb 13 12:04:45.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-n4gqz' Feb 13 12:04:45.250: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 13 12:04:45.250: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 13 12:04:45.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-n4gqz' Feb 13 12:04:45.455: INFO: stderr: "No resources found.\n" Feb 13 12:04:45.455: INFO: stdout: "" Feb 13 12:04:45.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-n4gqz -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 13 12:04:45.627: INFO: stderr: "" Feb 13 12:04:45.627: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 12:04:45.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-n4gqz" for this suite. 
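The readiness polling above repeatedly runs `kubectl get pods -o template` with a go-template that prints `true` only when the `update-demo` container reports a `running` state, and prints nothing while the container is still starting (which is why the test loops on empty stdout). As an illustrative aside — this is not part of the e2e suite, just a sketch of the same check re-expressed in Python against a pod object as returned by `kubectl get pod -o json`:

```python
def container_running(pod: dict, name: str) -> str:
    """Mirror the e2e go-template check: emit "true" for each containerStatus
    whose .name matches and whose .state has a "running" key, else emit "".

    Equivalent template from the log:
    {{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}
    {{if (and (eq .name "<name>") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}
    """
    out = ""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == name and "running" in cs.get("state", {}):
            out += "true"
    return out


# A pod whose container has entered the running state yields "true";
# a pod whose container is still waiting yields "" (the loop retries on that).
running_pod = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {"startedAt": "2020-02-13T12:04:40Z"}}},
]}}
pending_pod = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"waiting": {"reason": "ContainerCreating"}}},
]}}
print(container_running(running_pod, "update-demo"))  # prints "true"
print(container_running(pending_pod, "update-demo"))  # prints ""
```

The container name `update-demo` and the sample pod dicts here are assumptions for illustration; the real test evaluates the template server-side via kubectl against the live pod.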
Feb 13 12:05:09.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 12:05:09.746: INFO: namespace: e2e-tests-kubectl-n4gqz, resource: bindings, ignored listing per whitelist Feb 13 12:05:09.861: INFO: namespace e2e-tests-kubectl-n4gqz deletion completed in 24.209295666s • [SLOW TEST:45.643 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 12:05:09.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-4jhhj [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: 
Creating stateful set ss in namespace e2e-tests-statefulset-4jhhj STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-4jhhj Feb 13 12:05:10.264: INFO: Found 0 stateful pods, waiting for 1 Feb 13 12:05:20.274: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Feb 13 12:05:30.281: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Feb 13 12:05:30.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 13 12:05:31.215: INFO: stderr: "I0213 12:05:30.645396 1861 log.go:172] (0xc000138840) (0xc0005c74a0) Create stream\nI0213 12:05:30.645978 1861 log.go:172] (0xc000138840) (0xc0005c74a0) Stream added, broadcasting: 1\nI0213 12:05:30.656641 1861 log.go:172] (0xc000138840) Reply frame received for 1\nI0213 12:05:30.656690 1861 log.go:172] (0xc000138840) (0xc0005c7540) Create stream\nI0213 12:05:30.656708 1861 log.go:172] (0xc000138840) (0xc0005c7540) Stream added, broadcasting: 3\nI0213 12:05:30.657897 1861 log.go:172] (0xc000138840) Reply frame received for 3\nI0213 12:05:30.657959 1861 log.go:172] (0xc000138840) (0xc000788000) Create stream\nI0213 12:05:30.657995 1861 log.go:172] (0xc000138840) (0xc000788000) Stream added, broadcasting: 5\nI0213 12:05:30.659496 1861 log.go:172] (0xc000138840) Reply frame received for 5\nI0213 12:05:31.046718 1861 log.go:172] (0xc000138840) Data frame received for 3\nI0213 12:05:31.046969 1861 log.go:172] (0xc0005c7540) (3) Data frame handling\nI0213 12:05:31.047028 1861 log.go:172] (0xc0005c7540) (3) Data frame sent\nI0213 12:05:31.198741 1861 log.go:172] (0xc000138840) Data frame received for 1\nI0213 12:05:31.198960 1861 log.go:172] (0xc000138840) (0xc000788000) Stream removed, 
broadcasting: 5\nI0213 12:05:31.199054 1861 log.go:172] (0xc0005c74a0) (1) Data frame handling\nI0213 12:05:31.199107 1861 log.go:172] (0xc0005c74a0) (1) Data frame sent\nI0213 12:05:31.199244 1861 log.go:172] (0xc000138840) (0xc0005c7540) Stream removed, broadcasting: 3\nI0213 12:05:31.199287 1861 log.go:172] (0xc000138840) (0xc0005c74a0) Stream removed, broadcasting: 1\nI0213 12:05:31.199342 1861 log.go:172] (0xc000138840) Go away received\nI0213 12:05:31.200948 1861 log.go:172] (0xc000138840) (0xc0005c74a0) Stream removed, broadcasting: 1\nI0213 12:05:31.200981 1861 log.go:172] (0xc000138840) (0xc0005c7540) Stream removed, broadcasting: 3\nI0213 12:05:31.200999 1861 log.go:172] (0xc000138840) (0xc000788000) Stream removed, broadcasting: 5\n" Feb 13 12:05:31.215: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 13 12:05:31.215: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 13 12:05:31.226: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 13 12:05:41.244: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 13 12:05:41.244: INFO: Waiting for statefulset status.replicas updated to 0 Feb 13 12:05:41.392: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 12:05:41.392: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:10 +0000 UTC }] Feb 13 12:05:41.393: INFO: Feb 13 12:05:41.393: INFO: StatefulSet ss has not reached scale 3, at 1 Feb 13 12:05:42.412: INFO: 
Verifying statefulset ss doesn't scale past 3 for another 8.876156383s Feb 13 12:05:44.869: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.85632609s Feb 13 12:05:46.742: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.399449427s Feb 13 12:05:47.760: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.52651148s Feb 13 12:05:48.789: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.508762685s Feb 13 12:05:49.799: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.479680213s Feb 13 12:05:51.034: INFO: Verifying statefulset ss doesn't scale past 3 for another 469.815557ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-4jhhj Feb 13 12:05:52.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:05:54.010: INFO: stderr: "I0213 12:05:52.704573 1883 log.go:172] (0xc0006f8370) (0xc000716640) Create stream\nI0213 12:05:52.704903 1883 log.go:172] (0xc0006f8370) (0xc000716640) Stream added, broadcasting: 1\nI0213 12:05:52.713124 1883 log.go:172] (0xc0006f8370) Reply frame received for 1\nI0213 12:05:52.713176 1883 log.go:172] (0xc0006f8370) (0xc000590c80) Create stream\nI0213 12:05:52.713191 1883 log.go:172] (0xc0006f8370) (0xc000590c80) Stream added, broadcasting: 3\nI0213 12:05:52.714728 1883 log.go:172] (0xc0006f8370) Reply frame received for 3\nI0213 12:05:52.714747 1883 log.go:172] (0xc0006f8370) (0xc0007166e0) Create stream\nI0213 12:05:52.714752 1883 log.go:172] (0xc0006f8370) (0xc0007166e0) Stream added, broadcasting: 5\nI0213 12:05:52.717346 1883 log.go:172] (0xc0006f8370) Reply frame received for 5\nI0213 12:05:53.706380 1883 log.go:172] (0xc0006f8370) Data frame received for 3\nI0213 12:05:53.706653 1883 log.go:172] (0xc000590c80) (3) Data frame handling\nI0213 
12:05:53.706710 1883 log.go:172] (0xc000590c80) (3) Data frame sent\nI0213 12:05:53.998634 1883 log.go:172] (0xc0006f8370) Data frame received for 1\nI0213 12:05:53.998801 1883 log.go:172] (0xc0006f8370) (0xc000590c80) Stream removed, broadcasting: 3\nI0213 12:05:53.998871 1883 log.go:172] (0xc000716640) (1) Data frame handling\nI0213 12:05:53.998933 1883 log.go:172] (0xc000716640) (1) Data frame sent\nI0213 12:05:53.998959 1883 log.go:172] (0xc0006f8370) (0xc0007166e0) Stream removed, broadcasting: 5\nI0213 12:05:53.998987 1883 log.go:172] (0xc0006f8370) (0xc000716640) Stream removed, broadcasting: 1\nI0213 12:05:53.999022 1883 log.go:172] (0xc0006f8370) Go away received\nI0213 12:05:53.999649 1883 log.go:172] (0xc0006f8370) (0xc000716640) Stream removed, broadcasting: 1\nI0213 12:05:53.999681 1883 log.go:172] (0xc0006f8370) (0xc000590c80) Stream removed, broadcasting: 3\nI0213 12:05:53.999700 1883 log.go:172] (0xc0006f8370) (0xc0007166e0) Stream removed, broadcasting: 5\n" Feb 13 12:05:54.010: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 13 12:05:54.010: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 13 12:05:54.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:05:54.561: INFO: rc: 1 Feb 13 12:05:54.562: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc00207ade0 exit status 1 true [0xc0010cc6a8 0xc0010cc6c0 0xc0010cc6d8] [0xc0010cc6a8 0xc0010cc6c0 0xc0010cc6d8] [0xc0010cc6b8 0xc0010cc6d0] [0x935700 0x935700] 0xc00176b6e0 }: 
Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Feb 13 12:06:04.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:06:05.121: INFO: stderr: "I0213 12:06:04.767886 1926 log.go:172] (0xc0002102c0) (0xc000561720) Create stream\nI0213 12:06:04.768194 1926 log.go:172] (0xc0002102c0) (0xc000561720) Stream added, broadcasting: 1\nI0213 12:06:04.777182 1926 log.go:172] (0xc0002102c0) Reply frame received for 1\nI0213 12:06:04.777244 1926 log.go:172] (0xc0002102c0) (0xc00084e000) Create stream\nI0213 12:06:04.777287 1926 log.go:172] (0xc0002102c0) (0xc00084e000) Stream added, broadcasting: 3\nI0213 12:06:04.778152 1926 log.go:172] (0xc0002102c0) Reply frame received for 3\nI0213 12:06:04.778175 1926 log.go:172] (0xc0002102c0) (0xc0003a7c20) Create stream\nI0213 12:06:04.778185 1926 log.go:172] (0xc0002102c0) (0xc0003a7c20) Stream added, broadcasting: 5\nI0213 12:06:04.779076 1926 log.go:172] (0xc0002102c0) Reply frame received for 5\nI0213 12:06:04.909109 1926 log.go:172] (0xc0002102c0) Data frame received for 5\nI0213 12:06:04.909297 1926 log.go:172] (0xc0003a7c20) (5) Data frame handling\nI0213 12:06:04.909318 1926 log.go:172] (0xc0003a7c20) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0213 12:06:04.909348 1926 log.go:172] (0xc0002102c0) Data frame received for 3\nI0213 12:06:04.909357 1926 log.go:172] (0xc00084e000) (3) Data frame handling\nI0213 12:06:04.909369 1926 log.go:172] (0xc00084e000) (3) Data frame sent\nI0213 12:06:05.108414 1926 log.go:172] (0xc0002102c0) Data frame received for 1\nI0213 12:06:05.108619 1926 log.go:172] (0xc0002102c0) (0xc00084e000) Stream removed, broadcasting: 3\nI0213 12:06:05.110207 1926 log.go:172] (0xc0002102c0) (0xc0003a7c20) Stream removed, broadcasting: 5\nI0213 12:06:05.112970 1926 
log.go:172] (0xc000561720) (1) Data frame handling\nI0213 12:06:05.113018 1926 log.go:172] (0xc000561720) (1) Data frame sent\nI0213 12:06:05.113046 1926 log.go:172] (0xc0002102c0) (0xc000561720) Stream removed, broadcasting: 1\nI0213 12:06:05.113641 1926 log.go:172] (0xc0002102c0) Go away received\nI0213 12:06:05.114224 1926 log.go:172] (0xc0002102c0) (0xc000561720) Stream removed, broadcasting: 1\nI0213 12:06:05.114254 1926 log.go:172] (0xc0002102c0) (0xc00084e000) Stream removed, broadcasting: 3\nI0213 12:06:05.114272 1926 log.go:172] (0xc0002102c0) (0xc0003a7c20) Stream removed, broadcasting: 5\n" Feb 13 12:06:05.122: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 13 12:06:05.122: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 13 12:06:05.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:06:05.740: INFO: stderr: "I0213 12:06:05.287210 1947 log.go:172] (0xc000726370) (0xc000746640) Create stream\nI0213 12:06:05.287468 1947 log.go:172] (0xc000726370) (0xc000746640) Stream added, broadcasting: 1\nI0213 12:06:05.324534 1947 log.go:172] (0xc000726370) Reply frame received for 1\nI0213 12:06:05.324666 1947 log.go:172] (0xc000726370) (0xc0005dcc80) Create stream\nI0213 12:06:05.324682 1947 log.go:172] (0xc000726370) (0xc0005dcc80) Stream added, broadcasting: 3\nI0213 12:06:05.327092 1947 log.go:172] (0xc000726370) Reply frame received for 3\nI0213 12:06:05.327116 1947 log.go:172] (0xc000726370) (0xc000724000) Create stream\nI0213 12:06:05.327124 1947 log.go:172] (0xc000726370) (0xc000724000) Stream added, broadcasting: 5\nI0213 12:06:05.329063 1947 log.go:172] (0xc000726370) Reply frame received for 5\nI0213 12:06:05.573763 1947 log.go:172] (0xc000726370) Data frame received for 3\nI0213 
12:06:05.573974 1947 log.go:172] (0xc0005dcc80) (3) Data frame handling\nI0213 12:06:05.574001 1947 log.go:172] (0xc0005dcc80) (3) Data frame sent\nI0213 12:06:05.574127 1947 log.go:172] (0xc000726370) Data frame received for 5\nI0213 12:06:05.574231 1947 log.go:172] (0xc000724000) (5) Data frame handling\nI0213 12:06:05.574280 1947 log.go:172] (0xc000724000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0213 12:06:05.721459 1947 log.go:172] (0xc000726370) Data frame received for 1\nI0213 12:06:05.722096 1947 log.go:172] (0xc000726370) (0xc0005dcc80) Stream removed, broadcasting: 3\nI0213 12:06:05.722625 1947 log.go:172] (0xc000746640) (1) Data frame handling\nI0213 12:06:05.722853 1947 log.go:172] (0xc000746640) (1) Data frame sent\nI0213 12:06:05.722934 1947 log.go:172] (0xc000726370) (0xc000724000) Stream removed, broadcasting: 5\nI0213 12:06:05.723047 1947 log.go:172] (0xc000726370) (0xc000746640) Stream removed, broadcasting: 1\nI0213 12:06:05.723137 1947 log.go:172] (0xc000726370) Go away received\nI0213 12:06:05.725287 1947 log.go:172] (0xc000726370) (0xc000746640) Stream removed, broadcasting: 1\nI0213 12:06:05.725337 1947 log.go:172] (0xc000726370) (0xc0005dcc80) Stream removed, broadcasting: 3\nI0213 12:06:05.725364 1947 log.go:172] (0xc000726370) (0xc000724000) Stream removed, broadcasting: 5\n" Feb 13 12:06:05.741: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 13 12:06:05.741: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 13 12:06:05.766: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 13 12:06:05.766: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 13 12:06:05.766: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy 
stateful pod Feb 13 12:06:05.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 13 12:06:06.355: INFO: stderr: "I0213 12:06:05.953980 1969 log.go:172] (0xc0001d84d0) (0xc00068d2c0) Create stream\nI0213 12:06:05.954314 1969 log.go:172] (0xc0001d84d0) (0xc00068d2c0) Stream added, broadcasting: 1\nI0213 12:06:05.961451 1969 log.go:172] (0xc0001d84d0) Reply frame received for 1\nI0213 12:06:05.961589 1969 log.go:172] (0xc0001d84d0) (0xc0007be000) Create stream\nI0213 12:06:05.961626 1969 log.go:172] (0xc0001d84d0) (0xc0007be000) Stream added, broadcasting: 3\nI0213 12:06:05.962816 1969 log.go:172] (0xc0001d84d0) Reply frame received for 3\nI0213 12:06:05.962843 1969 log.go:172] (0xc0001d84d0) (0xc00068d360) Create stream\nI0213 12:06:05.962855 1969 log.go:172] (0xc0001d84d0) (0xc00068d360) Stream added, broadcasting: 5\nI0213 12:06:05.964417 1969 log.go:172] (0xc0001d84d0) Reply frame received for 5\nI0213 12:06:06.228258 1969 log.go:172] (0xc0001d84d0) Data frame received for 3\nI0213 12:06:06.228416 1969 log.go:172] (0xc0007be000) (3) Data frame handling\nI0213 12:06:06.228447 1969 log.go:172] (0xc0007be000) (3) Data frame sent\nI0213 12:06:06.344884 1969 log.go:172] (0xc0001d84d0) (0xc0007be000) Stream removed, broadcasting: 3\nI0213 12:06:06.345086 1969 log.go:172] (0xc0001d84d0) Data frame received for 1\nI0213 12:06:06.345152 1969 log.go:172] (0xc0001d84d0) (0xc00068d360) Stream removed, broadcasting: 5\nI0213 12:06:06.345238 1969 log.go:172] (0xc00068d2c0) (1) Data frame handling\nI0213 12:06:06.345287 1969 log.go:172] (0xc00068d2c0) (1) Data frame sent\nI0213 12:06:06.345319 1969 log.go:172] (0xc0001d84d0) (0xc00068d2c0) Stream removed, broadcasting: 1\nI0213 12:06:06.345345 1969 log.go:172] (0xc0001d84d0) Go away received\nI0213 12:06:06.345879 1969 log.go:172] (0xc0001d84d0) (0xc00068d2c0) Stream removed, 
broadcasting: 1\nI0213 12:06:06.345895 1969 log.go:172] (0xc0001d84d0) (0xc0007be000) Stream removed, broadcasting: 3\nI0213 12:06:06.345904 1969 log.go:172] (0xc0001d84d0) (0xc00068d360) Stream removed, broadcasting: 5\n" Feb 13 12:06:06.355: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 13 12:06:06.355: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 13 12:06:06.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 13 12:06:07.182: INFO: stderr: "I0213 12:06:06.592728 1991 log.go:172] (0xc0008522c0) (0xc00088e5a0) Create stream\nI0213 12:06:06.593453 1991 log.go:172] (0xc0008522c0) (0xc00088e5a0) Stream added, broadcasting: 1\nI0213 12:06:06.608197 1991 log.go:172] (0xc0008522c0) Reply frame received for 1\nI0213 12:06:06.608316 1991 log.go:172] (0xc0008522c0) (0xc0005d6dc0) Create stream\nI0213 12:06:06.608389 1991 log.go:172] (0xc0008522c0) (0xc0005d6dc0) Stream added, broadcasting: 3\nI0213 12:06:06.612349 1991 log.go:172] (0xc0008522c0) Reply frame received for 3\nI0213 12:06:06.612546 1991 log.go:172] (0xc0008522c0) (0xc00088e640) Create stream\nI0213 12:06:06.612571 1991 log.go:172] (0xc0008522c0) (0xc00088e640) Stream added, broadcasting: 5\nI0213 12:06:06.615544 1991 log.go:172] (0xc0008522c0) Reply frame received for 5\nI0213 12:06:06.832526 1991 log.go:172] (0xc0008522c0) Data frame received for 3\nI0213 12:06:06.832711 1991 log.go:172] (0xc0005d6dc0) (3) Data frame handling\nI0213 12:06:06.832764 1991 log.go:172] (0xc0005d6dc0) (3) Data frame sent\nI0213 12:06:07.174159 1991 log.go:172] (0xc0008522c0) (0xc00088e640) Stream removed, broadcasting: 5\nI0213 12:06:07.174323 1991 log.go:172] (0xc0008522c0) Data frame received for 1\nI0213 12:06:07.174359 1991 log.go:172] (0xc0008522c0) 
(0xc0005d6dc0) Stream removed, broadcasting: 3\nI0213 12:06:07.174420 1991 log.go:172] (0xc00088e5a0) (1) Data frame handling\nI0213 12:06:07.174457 1991 log.go:172] (0xc00088e5a0) (1) Data frame sent\nI0213 12:06:07.174478 1991 log.go:172] (0xc0008522c0) (0xc00088e5a0) Stream removed, broadcasting: 1\nI0213 12:06:07.174497 1991 log.go:172] (0xc0008522c0) Go away received\nI0213 12:06:07.175196 1991 log.go:172] (0xc0008522c0) (0xc00088e5a0) Stream removed, broadcasting: 1\nI0213 12:06:07.175207 1991 log.go:172] (0xc0008522c0) (0xc0005d6dc0) Stream removed, broadcasting: 3\nI0213 12:06:07.175212 1991 log.go:172] (0xc0008522c0) (0xc00088e640) Stream removed, broadcasting: 5\n" Feb 13 12:06:07.182: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 13 12:06:07.182: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 13 12:06:07.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 13 12:06:07.572: INFO: stderr: "I0213 12:06:07.332016 2013 log.go:172] (0xc00071a2c0) (0xc000744640) Create stream\nI0213 12:06:07.332269 2013 log.go:172] (0xc00071a2c0) (0xc000744640) Stream added, broadcasting: 1\nI0213 12:06:07.336878 2013 log.go:172] (0xc00071a2c0) Reply frame received for 1\nI0213 12:06:07.336919 2013 log.go:172] (0xc00071a2c0) (0xc000660c80) Create stream\nI0213 12:06:07.336926 2013 log.go:172] (0xc00071a2c0) (0xc000660c80) Stream added, broadcasting: 3\nI0213 12:06:07.338210 2013 log.go:172] (0xc00071a2c0) Reply frame received for 3\nI0213 12:06:07.338238 2013 log.go:172] (0xc00071a2c0) (0xc0006ba000) Create stream\nI0213 12:06:07.338246 2013 log.go:172] (0xc00071a2c0) (0xc0006ba000) Stream added, broadcasting: 5\nI0213 12:06:07.339505 2013 log.go:172] (0xc00071a2c0) Reply frame received for 5\nI0213 12:06:07.471358 
2013 log.go:172] (0xc00071a2c0) Data frame received for 3\nI0213 12:06:07.471446 2013 log.go:172] (0xc000660c80) (3) Data frame handling\nI0213 12:06:07.471478 2013 log.go:172] (0xc000660c80) (3) Data frame sent\nI0213 12:06:07.561280 2013 log.go:172] (0xc00071a2c0) (0xc000660c80) Stream removed, broadcasting: 3\nI0213 12:06:07.561584 2013 log.go:172] (0xc00071a2c0) Data frame received for 1\nI0213 12:06:07.561611 2013 log.go:172] (0xc000744640) (1) Data frame handling\nI0213 12:06:07.561652 2013 log.go:172] (0xc000744640) (1) Data frame sent\nI0213 12:06:07.561664 2013 log.go:172] (0xc00071a2c0) (0xc000744640) Stream removed, broadcasting: 1\nI0213 12:06:07.562383 2013 log.go:172] (0xc00071a2c0) (0xc0006ba000) Stream removed, broadcasting: 5\nI0213 12:06:07.562474 2013 log.go:172] (0xc00071a2c0) (0xc000744640) Stream removed, broadcasting: 1\nI0213 12:06:07.562527 2013 log.go:172] (0xc00071a2c0) (0xc000660c80) Stream removed, broadcasting: 3\nI0213 12:06:07.562601 2013 log.go:172] (0xc00071a2c0) (0xc0006ba000) Stream removed, broadcasting: 5\nI0213 12:06:07.562643 2013 log.go:172] (0xc00071a2c0) Go away received\n" Feb 13 12:06:07.572: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 13 12:06:07.572: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 13 12:06:07.572: INFO: Waiting for statefulset status.replicas updated to 0 Feb 13 12:06:07.582: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Feb 13 12:06:17.616: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 13 12:06:17.616: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 13 12:06:17.616: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 13 12:06:17.678: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 12:06:17.678: INFO: ss-0 
hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:10 +0000 UTC }] Feb 13 12:06:17.678: INFO: ss-1 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC }] Feb 13 12:06:17.678: INFO: ss-2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC }] Feb 13 12:06:17.678: INFO: Feb 13 12:06:17.678: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 13 12:06:18.721: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 12:06:18.721: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 
12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:10 +0000 UTC }] Feb 13 12:06:18.721: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC }] Feb 13 12:06:18.721: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC }] Feb 13 12:06:18.721: INFO: Feb 13 12:06:18.721: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 13 12:06:19.980: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 12:06:19.980: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:10 +0000 UTC }] Feb 13 12:06:19.980: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC 
} {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC }] Feb 13 12:06:19.980: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC }] Feb 13 12:06:19.980: INFO: Feb 13 12:06:19.980: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 13 12:06:21.045: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 12:06:21.045: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:10 +0000 UTC }] Feb 13 12:06:21.045: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC }] Feb 13 12:06:21.045: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC }] Feb 13 12:06:21.045: INFO: Feb 13 12:06:21.045: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 13 12:06:22.083: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 12:06:22.083: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:10 +0000 UTC }] Feb 13 12:06:22.084: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC }] Feb 13 12:06:22.084: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:07 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC }] Feb 13 12:06:22.084: INFO: Feb 13 12:06:22.084: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 13 12:06:23.884: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 12:06:23.884: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:10 +0000 UTC }] Feb 13 12:06:23.884: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC }] Feb 13 12:06:23.884: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC }] Feb 13 12:06:23.884: INFO: Feb 13 12:06:23.884: INFO: 
StatefulSet ss has not reached scale 0, at 3 Feb 13 12:06:25.451: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 12:06:25.451: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:10 +0000 UTC }] Feb 13 12:06:25.451: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC }] Feb 13 12:06:25.451: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC }] Feb 13 12:06:25.451: INFO: Feb 13 12:06:25.451: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 13 12:06:26.746: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 12:06:26.746: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 
12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:10 +0000 UTC }] Feb 13 12:06:26.746: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC }] Feb 13 12:06:26.746: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:06:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:05:41 +0000 UTC }] Feb 13 12:06:26.747: INFO: Feb 13 12:06:26.747: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-4jhhj Feb 13 12:06:27.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:06:28.084: INFO: rc: 1 Feb 13 12:06:28.084: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 --
/bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0022f30e0 exit status 1 true [0xc0019f4050 0xc0019f4068 0xc0019f4080] [0xc0019f4050 0xc0019f4068 0xc0019f4080] [0xc0019f4060 0xc0019f4078] [0x935700 0x935700] 0xc00143ad80 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Feb 13 12:06:38.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:06:38.225: INFO: rc: 1 Feb 13 12:06:38.226: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0022f3200 exit status 1 true [0xc0019f4088 0xc0019f40a0 0xc0019f40b8] [0xc0019f4088 0xc0019f40a0 0xc0019f40b8] [0xc0019f4098 0xc0019f40b0] [0x935700 0x935700] 0xc00143b380 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 12:06:48.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:06:48.488: INFO: rc: 1 Feb 13 12:06:48.489: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0022f3320 exit status 1 true [0xc0019f40c0 0xc0019f40d8 0xc0019f40f0] [0xc0019f40c0 0xc0019f40d8 0xc0019f40f0] [0xc0019f40d0 0xc0019f40e8] [0x935700 0x935700] 
0xc00143b9e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 12:06:58.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:06:58.664: INFO: rc: 1 Feb 13 12:06:58.664: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0022f3440 exit status 1 true [0xc0019f40f8 0xc0019f4110 0xc0019f4128] [0xc0019f40f8 0xc0019f4110 0xc0019f4128] [0xc0019f4108 0xc0019f4120] [0x935700 0x935700] 0xc0013d4180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 12:07:08.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:07:08.827: INFO: rc: 1 Feb 13 12:07:08.827: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0022f37d0 exit status 1 true [0xc0019f4130 0xc0019f4148 0xc0019f4160] [0xc0019f4130 0xc0019f4148 0xc0019f4160] [0xc0019f4140 0xc0019f4158] [0x935700 0x935700] 0xc0013d4a20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 12:07:18.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true' Feb 13 12:07:19.137: INFO: rc: 1 Feb 13 12:07:19.137: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0022f3950 exit status 1 true [0xc0019f4168 0xc0019f4180 0xc0019f4198] [0xc0019f4168 0xc0019f4180 0xc0019f4198] [0xc0019f4178 0xc0019f4190] [0x935700 0x935700] 0xc0013d54a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 12:07:29.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:07:29.291: INFO: rc: 1 Feb 13 12:07:29.291: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0022f3aa0 exit status 1 true [0xc0019f41a0 0xc0019f41b8 0xc0019f41d0] [0xc0019f41a0 0xc0019f41b8 0xc0019f41d0] [0xc0019f41b0 0xc0019f41c8] [0x935700 0x935700] 0xc0013d5ce0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 12:07:39.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:07:39.490: INFO: rc: 1 Feb 13 12:07:39.490: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from 
server (NotFound): pods "ss-0" not found [] 0xc0015ba150 exit status 1 true [0xc0010cc000 0xc0010cc018 0xc0010cc030] [0xc0010cc000 0xc0010cc018 0xc0010cc030] [0xc0010cc010 0xc0010cc028] [0x935700 0x935700] 0xc0019afb00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 12:07:49.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:07:49.662: INFO: rc: 1 Feb 13 12:07:49.662: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00207aba0 exit status 1 true [0xc000184090 0xc000184300 0xc000184510] [0xc000184090 0xc000184300 0xc000184510] [0xc000184298 0xc000184368] [0x935700 0x935700] 0xc00176b140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 12:07:59.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:07:59.853: INFO: rc: 1 Feb 13 12:07:59.854: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0015ba2a0 exit status 1 true [0xc0010cc038 0xc0010cc050 0xc0010cc068] [0xc0010cc038 0xc0010cc050 0xc0010cc068] [0xc0010cc048 0xc0010cc060] [0x935700 0x935700] 0xc0019c6f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 
Feb 13 12:08:09.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:08:10.027: INFO: rc: 1 Feb 13 12:08:10.027: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017fa1e0 exit status 1 true [0xc000fea000 0xc000fea018 0xc000fea030] [0xc000fea000 0xc000fea018 0xc000fea030] [0xc000fea010 0xc000fea028] [0x935700 0x935700] 0xc0009f9920 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 12:08:20.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:08:20.144: INFO: rc: 1 Feb 13 12:08:20.144: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0022f3bc0 exit status 1 true [0xc0019f41d8 0xc0019f41f0 0xc0019f4210] [0xc0019f41d8 0xc0019f41f0 0xc0019f4210] [0xc0019f41e8 0xc0019f4208] [0x935700 0x935700] 0xc000bc6660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 12:08:30.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:08:30.526: INFO: rc: 1 Feb 13 12:08:30.526: INFO: Waiting 10s to retry failed RunHostCmd: error 
running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00207a120 exit status 1 true [0xc000184090 0xc000184300 0xc000184510] [0xc000184090 0xc000184300 0xc000184510] [0xc000184298 0xc000184368] [0x935700 0x935700] 0xc001848fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 12:08:40.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:08:40.691: INFO: rc: 1 Feb 13 12:08:40.691: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017fa1b0 exit status 1 true [0xc000fea000 0xc000fea018 0xc000fea030] [0xc000fea000 0xc000fea018 0xc000fea030] [0xc000fea010 0xc000fea028] [0x935700 0x935700] 0xc0019afb00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 12:08:50.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:08:50.815: INFO: rc: 1 Feb 13 12:08:50.815: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017fa330 exit status 1 true [0xc000fea038 0xc000fea050 
0xc000fea068] [0xc000fea038 0xc000fea050 0xc000fea068] [0xc000fea048 0xc000fea060] [0x935700 0x935700] 0xc0013d45a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 12:09:00.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:09:01.747: INFO: rc: 1 Feb 13 12:09:01.748: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017fa510 exit status 1 true [0xc000fea070 0xc000fea088 0xc000fea0a0] [0xc000fea070 0xc000fea088 0xc000fea0a0] [0xc000fea080 0xc000fea098] [0x935700 0x935700] 0xc0013d5140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 12:09:11.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:09:11.929: INFO: rc: 1 Feb 13 12:09:11.930: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00207a240 exit status 1 true [0xc000184540 0xc000184600 0xc000184680] [0xc000184540 0xc000184600 0xc000184680] [0xc0001845f0 0xc000184630] [0x935700 0x935700] 0xc00143ad20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 12:09:21.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:09:22.181: INFO: rc: 1 Feb 13 12:09:22.181: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0022f2210 exit status 1 true [0xc0019f4000 0xc0019f4018 0xc0019f4030] [0xc0019f4000 0xc0019f4018 0xc0019f4030] [0xc0019f4010 0xc0019f4028] [0x935700 0x935700] 0xc00176a060 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 12:09:32.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:09:32.357: INFO: rc: 1 Feb 13 12:09:32.358: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017fa690 exit status 1 true [0xc000fea0a8 0xc000fea0c0 0xc000fea0d8] [0xc000fea0a8 0xc000fea0c0 0xc000fea0d8] [0xc000fea0b8 0xc000fea0d0] [0x935700 0x935700] 0xc0013d5920 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 12:09:42.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 13 12:09:42.521: INFO: rc: 1 Feb 13 12:09:42.521: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00207a390 exit status 1 true [0xc0001846e0 0xc000184728 0xc000184778] [0xc0001846e0 0xc000184728 0xc000184778] [0xc000184710 0xc000184770] [0x935700 0x935700] 0xc00143b200 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-0" not found
error:
exit status 1
Feb 13 12:09:52.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 12:09:52.665: INFO: rc: 1
Feb 13 12:09:52.666: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00207a4e0 exit status 1 true [0xc0001847a8 0xc000184860 0xc000184918] [0xc0001847a8 0xc000184860 0xc000184918] [0xc000184808 0xc000184908] [0x935700 0x935700] 0xc00143b860 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-0" not found
error:
exit status 1
Feb 13 12:10:02.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 12:10:02.856: INFO: rc: 1
Feb 13 12:10:02.856: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017fa810 exit status 1 true [0xc000fea0e0 0xc000fea0f8 0xc000fea110] [0xc000fea0e0 0xc000fea0f8 0xc000fea110] [0xc000fea0f0 0xc000fea108] [0x935700 0x935700] 0xc000bc64e0 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-0" not found
error:
exit status 1
Feb 13 12:10:12.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 12:10:13.016: INFO: rc: 1
Feb 13 12:10:13.016: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017fa930 exit status 1 true [0xc000fea118 0xc000fea130 0xc000fea148] [0xc000fea118 0xc000fea130 0xc000fea148] [0xc000fea128 0xc000fea140] [0x935700 0x935700] 0xc000bc7aa0 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-0" not found
error:
exit status 1
Feb 13 12:10:23.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 12:10:23.204: INFO: rc: 1
Feb 13 12:10:23.205: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00207a630 exit status 1 true [0xc000184948 0xc000184a70 0xc000184b60] [0xc000184948 0xc000184a70 0xc000184b60] [0xc000184a38 0xc000184ac8] [0x935700 0x935700] 0xc00143bf80 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-0" not found
error:
exit status 1
Feb 13 12:10:33.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 12:10:33.390: INFO: rc: 1
Feb 13 12:10:33.390: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0012bc0f0 exit status 1 true [0xc0010cc008 0xc0010cc020 0xc0010cc038] [0xc0010cc008 0xc0010cc020 0xc0010cc038] [0xc0010cc018 0xc0010cc030] [0x935700 0x935700] 0xc00175ef00 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-0" not found
error:
exit status 1
Feb 13 12:10:43.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 12:10:43.558: INFO: rc: 1
Feb 13 12:10:43.558: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00207a150 exit status 1 true [0xc000184000 0xc000184298 0xc000184368] [0xc000184000 0xc000184298 0xc000184368] [0xc000184220 0xc000184348] [0x935700 0x935700] 0xc00143a2a0 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-0" not found
error:
exit status 1
Feb 13 12:10:53.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 12:10:54.549: INFO: rc: 1
Feb 13 12:10:54.549: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00207a2d0 exit status 1 true [0xc000184510 0xc0001845f0 0xc000184630] [0xc000184510 0xc0001845f0 0xc000184630] [0xc0001845c0 0xc000184608] [0x935700 0x935700] 0xc00143b080 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-0" not found
error:
exit status 1
Feb 13 12:11:04.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 12:11:04.694: INFO: rc: 1
Feb 13 12:11:04.695: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00207a450 exit status 1 true [0xc000184680 0xc000184710 0xc000184770] [0xc000184680 0xc000184710 0xc000184770] [0xc000184700 0xc000184748] [0x935700 0x935700] 0xc00143b560 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-0" not found
error:
exit status 1
Feb 13 12:11:14.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 12:11:14.903: INFO: rc: 1
Feb 13 12:11:14.903: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0022f2150 exit status 1 true [0xc0010cc040 0xc0010cc058 0xc0010cc070] [0xc0010cc040 0xc0010cc058 0xc0010cc070] [0xc0010cc050 0xc0010cc068] [0x935700 0x935700] 0xc0013d4720 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-0" not found
error:
exit status 1
Feb 13 12:11:24.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 12:11:25.048: INFO: rc: 1
Feb 13 12:11:25.048: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00207a5a0 exit status 1 true [0xc000184778 0xc000184808 0xc000184908] [0xc000184778 0xc000184808 0xc000184908] [0xc0001847f8 0xc000184880] [0x935700 0x935700] 0xc00143be00 }:
Command stdout:
stderr:
Error from server (NotFound): pods "ss-0" not found
error:
exit status 1
Feb 13 12:11:35.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4jhhj ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 13 12:11:35.225: INFO: rc: 1
Feb 13 12:11:35.225: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
Feb 13 12:11:35.225: INFO: Scaling statefulset ss to 0
Feb 13 12:11:35.241: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 13 12:11:35.245: INFO: Deleting all statefulset in ns e2e-tests-statefulset-4jhhj
Feb 13 12:11:35.248: INFO: Scaling statefulset ss to 0
Feb 13 12:11:35.257: INFO: Waiting for statefulset status.replicas updated to 0
Feb 13 12:11:35.259: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:11:35.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-4jhhj" for this suite.
Feb 13 12:11:43.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:11:43.339: INFO: namespace: e2e-tests-statefulset-4jhhj, resource: bindings, ignored listing per whitelist
Feb 13 12:11:43.487: INFO: namespace e2e-tests-statefulset-4jhhj deletion completed in 8.203456646s
• [SLOW TEST:393.626 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:11:43.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-dvkl8
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 13 12:11:43.786: INFO: Waiting up to 10m0s for
all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 13 12:12:19.989: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-dvkl8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 12:12:19.989: INFO: >>> kubeConfig: /root/.kube/config
I0213 12:12:20.060538 8 log.go:172] (0xc0007ea9a0) (0xc000ee40a0) Create stream
I0213 12:12:20.060746 8 log.go:172] (0xc0007ea9a0) (0xc000ee40a0) Stream added, broadcasting: 1
I0213 12:12:20.067101 8 log.go:172] (0xc0007ea9a0) Reply frame received for 1
I0213 12:12:20.067140 8 log.go:172] (0xc0007ea9a0) (0xc00151f220) Create stream
I0213 12:12:20.067152 8 log.go:172] (0xc0007ea9a0) (0xc00151f220) Stream added, broadcasting: 3
I0213 12:12:20.068368 8 log.go:172] (0xc0007ea9a0) Reply frame received for 3
I0213 12:12:20.068401 8 log.go:172] (0xc0007ea9a0) (0xc002433220) Create stream
I0213 12:12:20.068412 8 log.go:172] (0xc0007ea9a0) (0xc002433220) Stream added, broadcasting: 5
I0213 12:12:20.069333 8 log.go:172] (0xc0007ea9a0) Reply frame received for 5
I0213 12:12:21.290743 8 log.go:172] (0xc0007ea9a0) Data frame received for 3
I0213 12:12:21.290856 8 log.go:172] (0xc00151f220) (3) Data frame handling
I0213 12:12:21.290911 8 log.go:172] (0xc00151f220) (3) Data frame sent
I0213 12:12:21.469661 8 log.go:172] (0xc0007ea9a0) Data frame received for 1
I0213 12:12:21.469874 8 log.go:172] (0xc000ee40a0) (1) Data frame handling
I0213 12:12:21.470005 8 log.go:172] (0xc000ee40a0) (1) Data frame sent
I0213 12:12:21.470185 8 log.go:172] (0xc0007ea9a0) (0xc000ee40a0) Stream removed, broadcasting: 1
I0213 12:12:21.471913 8 log.go:172] (0xc0007ea9a0) (0xc00151f220) Stream removed, broadcasting: 3
I0213 12:12:21.472296 8 log.go:172] (0xc0007ea9a0) (0xc002433220) Stream removed, broadcasting: 5
I0213 12:12:21.472392 8 log.go:172] (0xc0007ea9a0) (0xc000ee40a0) Stream removed, broadcasting: 1
I0213 12:12:21.472415 8 log.go:172] (0xc0007ea9a0) (0xc00151f220) Stream removed, broadcasting: 3
I0213 12:12:21.472435 8 log.go:172] (0xc0007ea9a0) (0xc002433220) Stream removed, broadcasting: 5
Feb 13 12:12:21.472: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:12:21.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0213 12:12:21.474430 8 log.go:172] (0xc0007ea9a0) Go away received
STEP: Destroying namespace "e2e-tests-pod-network-test-dvkl8" for this suite.
Feb 13 12:12:45.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:12:45.664: INFO: namespace: e2e-tests-pod-network-test-dvkl8, resource: bindings, ignored listing per whitelist
Feb 13 12:12:45.703: INFO: namespace e2e-tests-pod-network-test-dvkl8 deletion completed in 24.200476484s
• [SLOW TEST:62.215 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:12:45.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service
account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 13 12:12:45.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-k48gr'
Feb 13 12:12:46.116: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 13 12:12:46.117: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb 13 12:12:48.236: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-gvbqc]
Feb 13 12:12:48.236: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-gvbqc" in namespace "e2e-tests-kubectl-k48gr" to be "running and ready"
Feb 13 12:12:48.243: INFO: Pod "e2e-test-nginx-rc-gvbqc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.596925ms
Feb 13 12:12:50.259: INFO: Pod "e2e-test-nginx-rc-gvbqc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02327376s
Feb 13 12:12:52.617: INFO: Pod "e2e-test-nginx-rc-gvbqc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.381072728s
Feb 13 12:12:54.629: INFO: Pod "e2e-test-nginx-rc-gvbqc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.393132709s
Feb 13 12:12:56.642: INFO: Pod "e2e-test-nginx-rc-gvbqc": Phase="Running", Reason="", readiness=true. Elapsed: 8.406192674s
Feb 13 12:12:56.642: INFO: Pod "e2e-test-nginx-rc-gvbqc" satisfied condition "running and ready"
Feb 13 12:12:56.642: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-gvbqc]
Feb 13 12:12:56.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-k48gr'
Feb 13 12:12:56.892: INFO: stderr: ""
Feb 13 12:12:56.892: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Feb 13 12:12:56.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-k48gr'
Feb 13 12:12:57.083: INFO: stderr: ""
Feb 13 12:12:57.083: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:12:57.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-k48gr" for this suite.
Feb 13 12:13:21.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:13:21.391: INFO: namespace: e2e-tests-kubectl-k48gr, resource: bindings, ignored listing per whitelist
Feb 13 12:13:21.409: INFO: namespace e2e-tests-kubectl-k48gr deletion completed in 24.252355449s
• [SLOW TEST:35.705 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:13:21.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 13 12:13:21.687: INFO: Waiting up to 5m0s for pod "pod-3a133594-4e5a-11ea-aba9-0242ac110007" in namespace "e2e-tests-emptydir-xhnbq" to be "success or failure"
Feb 13 12:13:21.705: INFO: Pod "pod-3a133594-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false.
Elapsed: 18.083092ms
Feb 13 12:13:24.180: INFO: Pod "pod-3a133594-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.492999212s
Feb 13 12:13:26.196: INFO: Pod "pod-3a133594-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.508707659s
Feb 13 12:13:28.594: INFO: Pod "pod-3a133594-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.906944392s
Feb 13 12:13:30.636: INFO: Pod "pod-3a133594-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.949074999s
Feb 13 12:13:32.651: INFO: Pod "pod-3a133594-4e5a-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.96443712s
STEP: Saw pod success
Feb 13 12:13:32.652: INFO: Pod "pod-3a133594-4e5a-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:13:32.660: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3a133594-4e5a-11ea-aba9-0242ac110007 container test-container:
STEP: delete the pod
Feb 13 12:13:32.773: INFO: Waiting for pod pod-3a133594-4e5a-11ea-aba9-0242ac110007 to disappear
Feb 13 12:13:32.798: INFO: Pod pod-3a133594-4e5a-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:13:32.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xhnbq" for this suite.
Feb 13 12:13:38.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:13:39.182: INFO: namespace: e2e-tests-emptydir-xhnbq, resource: bindings, ignored listing per whitelist
Feb 13 12:13:39.287: INFO: namespace e2e-tests-emptydir-xhnbq deletion completed in 6.400654234s
• [SLOW TEST:17.878 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:13:39.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 13 12:13:39.577: INFO: Waiting up to 5m0s for pod "downwardapi-volume-44bf3984-4e5a-11ea-aba9-0242ac110007" in namespace "e2e-tests-downward-api-zl6kk" to be "success or failure"
Feb 13 12:13:39.619: INFO: Pod "downwardapi-volume-44bf3984-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false.
Elapsed: 41.867791ms
Feb 13 12:13:41.631: INFO: Pod "downwardapi-volume-44bf3984-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054415272s
Feb 13 12:13:43.648: INFO: Pod "downwardapi-volume-44bf3984-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071136037s
Feb 13 12:13:45.831: INFO: Pod "downwardapi-volume-44bf3984-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.254522248s
Feb 13 12:13:47.844: INFO: Pod "downwardapi-volume-44bf3984-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.266837004s
Feb 13 12:13:49.857: INFO: Pod "downwardapi-volume-44bf3984-4e5a-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.280195201s
STEP: Saw pod success
Feb 13 12:13:49.857: INFO: Pod "downwardapi-volume-44bf3984-4e5a-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:13:49.863: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-44bf3984-4e5a-11ea-aba9-0242ac110007 container client-container:
STEP: delete the pod
Feb 13 12:13:50.590: INFO: Waiting for pod downwardapi-volume-44bf3984-4e5a-11ea-aba9-0242ac110007 to disappear
Feb 13 12:13:50.654: INFO: Pod downwardapi-volume-44bf3984-4e5a-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:13:50.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zl6kk" for this suite.
Feb 13 12:13:56.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:13:57.061: INFO: namespace: e2e-tests-downward-api-zl6kk, resource: bindings, ignored listing per whitelist
Feb 13 12:13:57.105: INFO: namespace e2e-tests-downward-api-zl6kk deletion completed in 6.339672267s
• [SLOW TEST:17.817 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:13:57.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-tnkxk/secret-test-4f678bf5-4e5a-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 13 12:13:57.622: INFO: Waiting up to 5m0s for pod "pod-configmaps-4f77accd-4e5a-11ea-aba9-0242ac110007" in namespace "e2e-tests-secrets-tnkxk" to be "success or failure"
Feb 13 12:13:57.769: INFO: Pod "pod-configmaps-4f77accd-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false.
Elapsed: 146.66017ms
Feb 13 12:13:59.781: INFO: Pod "pod-configmaps-4f77accd-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15939109s
Feb 13 12:14:01.815: INFO: Pod "pod-configmaps-4f77accd-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193289348s
Feb 13 12:14:04.430: INFO: Pod "pod-configmaps-4f77accd-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.807976863s
Feb 13 12:14:07.064: INFO: Pod "pod-configmaps-4f77accd-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.442146411s
Feb 13 12:14:09.083: INFO: Pod "pod-configmaps-4f77accd-4e5a-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.461075133s
STEP: Saw pod success
Feb 13 12:14:09.083: INFO: Pod "pod-configmaps-4f77accd-4e5a-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:14:09.099: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-4f77accd-4e5a-11ea-aba9-0242ac110007 container env-test:
STEP: delete the pod
Feb 13 12:14:09.764: INFO: Waiting for pod pod-configmaps-4f77accd-4e5a-11ea-aba9-0242ac110007 to disappear
Feb 13 12:14:09.915: INFO: Pod pod-configmaps-4f77accd-4e5a-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:14:09.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-tnkxk" for this suite.
Feb 13 12:14:15.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:14:16.117: INFO: namespace: e2e-tests-secrets-tnkxk, resource: bindings, ignored listing per whitelist
Feb 13 12:14:16.275: INFO: namespace e2e-tests-secrets-tnkxk deletion completed in 6.348207663s
• [SLOW TEST:19.170 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:14:16.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Feb 13 12:14:16.531: INFO: Waiting up to 5m0s for pod "client-containers-5abad8e1-4e5a-11ea-aba9-0242ac110007" in namespace "e2e-tests-containers-wnvtl" to be "success or failure"
Feb 13 12:14:16.592: INFO: Pod "client-containers-5abad8e1-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false.
Elapsed: 61.072431ms
Feb 13 12:14:18.632: INFO: Pod "client-containers-5abad8e1-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100629807s
Feb 13 12:14:20.649: INFO: Pod "client-containers-5abad8e1-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117594899s
Feb 13 12:14:23.452: INFO: Pod "client-containers-5abad8e1-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.920548958s
Feb 13 12:14:25.481: INFO: Pod "client-containers-5abad8e1-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.949549819s
Feb 13 12:14:27.490: INFO: Pod "client-containers-5abad8e1-4e5a-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.959008762s
STEP: Saw pod success
Feb 13 12:14:27.490: INFO: Pod "client-containers-5abad8e1-4e5a-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:14:27.495: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-5abad8e1-4e5a-11ea-aba9-0242ac110007 container test-container:
STEP: delete the pod
Feb 13 12:14:28.378: INFO: Waiting for pod client-containers-5abad8e1-4e5a-11ea-aba9-0242ac110007 to disappear
Feb 13 12:14:28.694: INFO: Pod client-containers-5abad8e1-4e5a-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:14:28.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-wnvtl" for this suite.
Feb 13 12:14:36.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:14:36.901: INFO: namespace: e2e-tests-containers-wnvtl, resource: bindings, ignored listing per whitelist
Feb 13 12:14:37.183: INFO: namespace e2e-tests-containers-wnvtl deletion completed in 8.457972066s
• [SLOW TEST:20.908 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:14:37.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-6p2g6
I0213 12:14:37.450890 8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-6p2g6, replica count: 1
I0213 12:14:38.501483 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0213 12:14:39.501842 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0
terminating, 0 unknown, 0 runningButNotReady I0213 12:14:40.502346 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 12:14:41.502764 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 12:14:42.503290 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 12:14:43.503715 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 12:14:44.504115 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 12:14:45.504475 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 12:14:46.504858 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0213 12:14:47.505224 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 13 12:14:47.860: INFO: Created: latency-svc-s25rl Feb 13 12:14:47.925: INFO: Got endpoints: latency-svc-s25rl [319.723857ms] Feb 13 12:14:48.075: INFO: Created: latency-svc-wfb6d Feb 13 12:14:48.315: INFO: Got endpoints: latency-svc-wfb6d [389.145731ms] Feb 13 12:14:48.318: INFO: Created: latency-svc-8cb2n Feb 13 12:14:48.348: INFO: Got endpoints: latency-svc-8cb2n [422.19047ms] Feb 13 12:14:48.550: INFO: Created: latency-svc-jlhls Feb 13 12:14:48.576: INFO: Got endpoints: latency-svc-jlhls [649.691445ms] Feb 13 12:14:48.676: INFO: Created: latency-svc-6jx6g 
Feb 13 12:14:48.694: INFO: Got endpoints: latency-svc-6jx6g [767.992743ms] Feb 13 12:14:48.755: INFO: Created: latency-svc-8n8db Feb 13 12:14:48.934: INFO: Got endpoints: latency-svc-8n8db [1.00796611s] Feb 13 12:14:48.953: INFO: Created: latency-svc-q6g8t Feb 13 12:14:48.978: INFO: Got endpoints: latency-svc-q6g8t [1.052064553s] Feb 13 12:14:49.033: INFO: Created: latency-svc-cgcvh Feb 13 12:14:49.164: INFO: Got endpoints: latency-svc-cgcvh [1.237725943s] Feb 13 12:14:49.207: INFO: Created: latency-svc-mhx5f Feb 13 12:14:49.220: INFO: Got endpoints: latency-svc-mhx5f [1.294010736s] Feb 13 12:14:49.408: INFO: Created: latency-svc-ggh76 Feb 13 12:14:49.443: INFO: Got endpoints: latency-svc-ggh76 [1.516961344s] Feb 13 12:14:49.663: INFO: Created: latency-svc-2bpz9 Feb 13 12:14:49.713: INFO: Got endpoints: latency-svc-2bpz9 [1.787329004s] Feb 13 12:14:49.847: INFO: Created: latency-svc-cnbgq Feb 13 12:14:49.984: INFO: Got endpoints: latency-svc-cnbgq [2.058997351s] Feb 13 12:14:50.023: INFO: Created: latency-svc-t4lgp Feb 13 12:14:50.200: INFO: Created: latency-svc-w462g Feb 13 12:14:50.215: INFO: Got endpoints: latency-svc-t4lgp [2.288991894s] Feb 13 12:14:50.224: INFO: Got endpoints: latency-svc-w462g [2.298749473s] Feb 13 12:14:50.265: INFO: Created: latency-svc-ds2c5 Feb 13 12:14:50.376: INFO: Got endpoints: latency-svc-ds2c5 [2.450513676s] Feb 13 12:14:50.402: INFO: Created: latency-svc-g6mcw Feb 13 12:14:50.405: INFO: Got endpoints: latency-svc-g6mcw [2.47898471s] Feb 13 12:14:50.470: INFO: Created: latency-svc-h6hr4 Feb 13 12:14:50.603: INFO: Got endpoints: latency-svc-h6hr4 [2.287992833s] Feb 13 12:14:50.694: INFO: Created: latency-svc-xx4vq Feb 13 12:14:50.846: INFO: Got endpoints: latency-svc-xx4vq [2.49772642s] Feb 13 12:14:50.873: INFO: Created: latency-svc-6965f Feb 13 12:14:50.883: INFO: Got endpoints: latency-svc-6965f [2.306945346s] Feb 13 12:14:51.122: INFO: Created: latency-svc-28v7b Feb 13 12:14:51.143: INFO: Got endpoints: latency-svc-28v7b 
[2.448902231s] Feb 13 12:14:51.148: INFO: Created: latency-svc-2spp4 Feb 13 12:14:51.150: INFO: Got endpoints: latency-svc-2spp4 [2.215809213s] Feb 13 12:14:51.200: INFO: Created: latency-svc-w59xw Feb 13 12:14:51.330: INFO: Got endpoints: latency-svc-w59xw [2.351638638s] Feb 13 12:14:51.363: INFO: Created: latency-svc-gh4gb Feb 13 12:14:51.367: INFO: Got endpoints: latency-svc-gh4gb [2.203715006s] Feb 13 12:14:51.420: INFO: Created: latency-svc-7ctfg Feb 13 12:14:51.540: INFO: Got endpoints: latency-svc-7ctfg [2.319799019s] Feb 13 12:14:51.570: INFO: Created: latency-svc-zt2n5 Feb 13 12:14:51.625: INFO: Created: latency-svc-nrjz9 Feb 13 12:14:51.628: INFO: Got endpoints: latency-svc-zt2n5 [2.185115569s] Feb 13 12:14:51.774: INFO: Got endpoints: latency-svc-nrjz9 [2.060401493s] Feb 13 12:14:51.847: INFO: Created: latency-svc-gtvds Feb 13 12:14:52.054: INFO: Got endpoints: latency-svc-gtvds [2.069075349s] Feb 13 12:14:52.074: INFO: Created: latency-svc-q6t5t Feb 13 12:14:52.088: INFO: Got endpoints: latency-svc-q6t5t [1.872955783s] Feb 13 12:14:52.266: INFO: Created: latency-svc-ktm6b Feb 13 12:14:52.279: INFO: Got endpoints: latency-svc-ktm6b [2.055055179s] Feb 13 12:14:52.352: INFO: Created: latency-svc-bbwxd Feb 13 12:14:52.504: INFO: Got endpoints: latency-svc-bbwxd [2.128272996s] Feb 13 12:14:52.565: INFO: Created: latency-svc-smtxs Feb 13 12:14:52.907: INFO: Got endpoints: latency-svc-smtxs [2.502355387s] Feb 13 12:14:52.937: INFO: Created: latency-svc-vgsr4 Feb 13 12:14:52.973: INFO: Got endpoints: latency-svc-vgsr4 [2.369904533s] Feb 13 12:14:53.138: INFO: Created: latency-svc-gkvhp Feb 13 12:14:53.167: INFO: Got endpoints: latency-svc-gkvhp [2.320787692s] Feb 13 12:14:53.337: INFO: Created: latency-svc-wcq9p Feb 13 12:14:53.353: INFO: Got endpoints: latency-svc-wcq9p [2.470715409s] Feb 13 12:14:53.417: INFO: Created: latency-svc-6bxq7 Feb 13 12:14:53.507: INFO: Got endpoints: latency-svc-6bxq7 [2.363408692s] Feb 13 12:14:53.540: INFO: Created: 
latency-svc-rndbb Feb 13 12:14:53.576: INFO: Got endpoints: latency-svc-rndbb [2.426345474s] Feb 13 12:14:53.713: INFO: Created: latency-svc-zjdkx Feb 13 12:14:53.750: INFO: Got endpoints: latency-svc-zjdkx [2.420692697s] Feb 13 12:14:54.011: INFO: Created: latency-svc-44hzs Feb 13 12:14:54.012: INFO: Got endpoints: latency-svc-44hzs [2.644143376s] Feb 13 12:14:54.246: INFO: Created: latency-svc-grjqd Feb 13 12:14:54.312: INFO: Got endpoints: latency-svc-grjqd [2.771945886s] Feb 13 12:14:54.451: INFO: Created: latency-svc-sdwwr Feb 13 12:14:54.469: INFO: Got endpoints: latency-svc-sdwwr [2.840661165s] Feb 13 12:14:54.659: INFO: Created: latency-svc-qvgb8 Feb 13 12:14:54.671: INFO: Got endpoints: latency-svc-qvgb8 [2.89647425s] Feb 13 12:14:54.808: INFO: Created: latency-svc-nmx6f Feb 13 12:14:54.843: INFO: Got endpoints: latency-svc-nmx6f [2.789110808s] Feb 13 12:14:55.008: INFO: Created: latency-svc-xmqs7 Feb 13 12:14:55.071: INFO: Got endpoints: latency-svc-xmqs7 [2.982452463s] Feb 13 12:14:55.275: INFO: Created: latency-svc-nrg5p Feb 13 12:14:55.293: INFO: Got endpoints: latency-svc-nrg5p [3.013890251s] Feb 13 12:14:55.346: INFO: Created: latency-svc-qqqnf Feb 13 12:14:55.440: INFO: Got endpoints: latency-svc-qqqnf [2.936022569s] Feb 13 12:14:55.469: INFO: Created: latency-svc-5qkc4 Feb 13 12:14:55.488: INFO: Got endpoints: latency-svc-5qkc4 [2.580051841s] Feb 13 12:14:55.633: INFO: Created: latency-svc-9ms8q Feb 13 12:14:55.648: INFO: Got endpoints: latency-svc-9ms8q [2.674189946s] Feb 13 12:14:55.781: INFO: Created: latency-svc-l8w9z Feb 13 12:14:55.804: INFO: Got endpoints: latency-svc-l8w9z [2.636860653s] Feb 13 12:14:55.878: INFO: Created: latency-svc-2fthq Feb 13 12:14:56.021: INFO: Got endpoints: latency-svc-2fthq [2.667448091s] Feb 13 12:14:56.058: INFO: Created: latency-svc-2mnhn Feb 13 12:14:56.070: INFO: Got endpoints: latency-svc-2mnhn [2.563298111s] Feb 13 12:14:56.309: INFO: Created: latency-svc-ffwpl Feb 13 12:14:56.371: INFO: Got endpoints: 
latency-svc-ffwpl [2.794846521s] Feb 13 12:14:56.523: INFO: Created: latency-svc-vtz49 Feb 13 12:14:56.554: INFO: Got endpoints: latency-svc-vtz49 [2.803876685s] Feb 13 12:14:56.685: INFO: Created: latency-svc-z8b8v Feb 13 12:14:56.721: INFO: Got endpoints: latency-svc-z8b8v [2.709607502s] Feb 13 12:14:56.895: INFO: Created: latency-svc-5fgsw Feb 13 12:14:56.915: INFO: Got endpoints: latency-svc-5fgsw [2.602734789s] Feb 13 12:14:56.956: INFO: Created: latency-svc-2qw8v Feb 13 12:14:57.077: INFO: Got endpoints: latency-svc-2qw8v [2.608232052s] Feb 13 12:14:57.128: INFO: Created: latency-svc-pcf9h Feb 13 12:14:57.175: INFO: Got endpoints: latency-svc-pcf9h [2.503696805s] Feb 13 12:14:57.298: INFO: Created: latency-svc-lddv5 Feb 13 12:14:57.311: INFO: Got endpoints: latency-svc-lddv5 [233.31433ms] Feb 13 12:14:57.480: INFO: Created: latency-svc-4fm2n Feb 13 12:14:57.487: INFO: Got endpoints: latency-svc-4fm2n [2.643361651s] Feb 13 12:14:57.560: INFO: Created: latency-svc-g2x2q Feb 13 12:14:57.720: INFO: Got endpoints: latency-svc-g2x2q [2.649448844s] Feb 13 12:14:57.753: INFO: Created: latency-svc-9gc7j Feb 13 12:14:57.791: INFO: Got endpoints: latency-svc-9gc7j [2.497180048s] Feb 13 12:14:57.911: INFO: Created: latency-svc-5frjp Feb 13 12:14:58.207: INFO: Got endpoints: latency-svc-5frjp [2.766829938s] Feb 13 12:14:58.212: INFO: Created: latency-svc-gtwlj Feb 13 12:14:58.226: INFO: Got endpoints: latency-svc-gtwlj [2.738200544s] Feb 13 12:14:58.396: INFO: Created: latency-svc-kg6q8 Feb 13 12:14:58.430: INFO: Got endpoints: latency-svc-kg6q8 [2.781820136s] Feb 13 12:14:58.516: INFO: Created: latency-svc-qm5mv Feb 13 12:14:58.613: INFO: Got endpoints: latency-svc-qm5mv [2.808578822s] Feb 13 12:14:58.658: INFO: Created: latency-svc-bwhl2 Feb 13 12:14:58.709: INFO: Got endpoints: latency-svc-bwhl2 [2.688038915s] Feb 13 12:14:58.855: INFO: Created: latency-svc-66c92 Feb 13 12:14:58.880: INFO: Got endpoints: latency-svc-66c92 [2.809532911s] Feb 13 12:14:59.024: INFO: 
Created: latency-svc-gj4mw Feb 13 12:14:59.043: INFO: Got endpoints: latency-svc-gj4mw [2.67151531s] Feb 13 12:14:59.090: INFO: Created: latency-svc-ngmfp Feb 13 12:14:59.106: INFO: Got endpoints: latency-svc-ngmfp [2.551617342s] Feb 13 12:14:59.265: INFO: Created: latency-svc-tg7zc Feb 13 12:14:59.330: INFO: Got endpoints: latency-svc-tg7zc [2.608559734s] Feb 13 12:14:59.419: INFO: Created: latency-svc-m8547 Feb 13 12:14:59.442: INFO: Got endpoints: latency-svc-m8547 [2.527227568s] Feb 13 12:14:59.487: INFO: Created: latency-svc-r8gqb Feb 13 12:14:59.509: INFO: Got endpoints: latency-svc-r8gqb [2.334143677s] Feb 13 12:14:59.621: INFO: Created: latency-svc-296ww Feb 13 12:14:59.630: INFO: Got endpoints: latency-svc-296ww [2.318445042s] Feb 13 12:14:59.866: INFO: Created: latency-svc-sfnn8 Feb 13 12:14:59.868: INFO: Got endpoints: latency-svc-sfnn8 [2.380585644s] Feb 13 12:15:00.095: INFO: Created: latency-svc-dl6ph Feb 13 12:15:00.122: INFO: Got endpoints: latency-svc-dl6ph [2.401579421s] Feb 13 12:15:00.399: INFO: Created: latency-svc-vkh8k Feb 13 12:15:00.417: INFO: Got endpoints: latency-svc-vkh8k [2.62570402s] Feb 13 12:15:00.577: INFO: Created: latency-svc-bcnvp Feb 13 12:15:00.588: INFO: Got endpoints: latency-svc-bcnvp [2.380571987s] Feb 13 12:15:00.604: INFO: Created: latency-svc-4sbhl Feb 13 12:15:00.637: INFO: Got endpoints: latency-svc-4sbhl [2.411042196s] Feb 13 12:15:00.775: INFO: Created: latency-svc-g4kzc Feb 13 12:15:00.804: INFO: Got endpoints: latency-svc-g4kzc [2.373665874s] Feb 13 12:15:00.944: INFO: Created: latency-svc-g9shg Feb 13 12:15:01.003: INFO: Got endpoints: latency-svc-g9shg [2.389634077s] Feb 13 12:15:01.227: INFO: Created: latency-svc-jzh4h Feb 13 12:15:01.343: INFO: Got endpoints: latency-svc-jzh4h [2.634003488s] Feb 13 12:15:01.618: INFO: Created: latency-svc-pjrfr Feb 13 12:15:01.845: INFO: Got endpoints: latency-svc-pjrfr [2.965044162s] Feb 13 12:15:01.915: INFO: Created: latency-svc-c7lb6 Feb 13 12:15:01.917: INFO: Got 
endpoints: latency-svc-c7lb6 [2.873464879s] Feb 13 12:15:02.324: INFO: Created: latency-svc-8snnj Feb 13 12:15:02.493: INFO: Got endpoints: latency-svc-8snnj [3.386625405s] Feb 13 12:15:02.558: INFO: Created: latency-svc-bmjmx Feb 13 12:15:02.782: INFO: Got endpoints: latency-svc-bmjmx [3.451318902s] Feb 13 12:15:02.809: INFO: Created: latency-svc-w8d8l Feb 13 12:15:02.815: INFO: Got endpoints: latency-svc-w8d8l [3.373300307s] Feb 13 12:15:03.018: INFO: Created: latency-svc-d7vmb Feb 13 12:15:03.048: INFO: Got endpoints: latency-svc-d7vmb [3.5388605s] Feb 13 12:15:03.273: INFO: Created: latency-svc-f8pd9 Feb 13 12:15:03.326: INFO: Got endpoints: latency-svc-f8pd9 [3.696862244s] Feb 13 12:15:03.486: INFO: Created: latency-svc-nq24m Feb 13 12:15:03.569: INFO: Created: latency-svc-f7gxr Feb 13 12:15:03.572: INFO: Got endpoints: latency-svc-nq24m [3.70404158s] Feb 13 12:15:03.681: INFO: Got endpoints: latency-svc-f7gxr [3.558602711s] Feb 13 12:15:03.697: INFO: Created: latency-svc-xdxdj Feb 13 12:15:03.723: INFO: Got endpoints: latency-svc-xdxdj [3.305971766s] Feb 13 12:15:03.752: INFO: Created: latency-svc-686j2 Feb 13 12:15:03.838: INFO: Got endpoints: latency-svc-686j2 [3.24954242s] Feb 13 12:15:03.880: INFO: Created: latency-svc-9fbrb Feb 13 12:15:03.892: INFO: Got endpoints: latency-svc-9fbrb [3.254829327s] Feb 13 12:15:03.945: INFO: Created: latency-svc-h845s Feb 13 12:15:04.039: INFO: Got endpoints: latency-svc-h845s [3.23537027s] Feb 13 12:15:04.061: INFO: Created: latency-svc-fv4ww Feb 13 12:15:04.083: INFO: Got endpoints: latency-svc-fv4ww [3.080226232s] Feb 13 12:15:04.324: INFO: Created: latency-svc-86gv4 Feb 13 12:15:04.337: INFO: Got endpoints: latency-svc-86gv4 [2.993676788s] Feb 13 12:15:04.575: INFO: Created: latency-svc-sszl2 Feb 13 12:15:04.659: INFO: Created: latency-svc-gtkgc Feb 13 12:15:04.829: INFO: Got endpoints: latency-svc-sszl2 [2.983913672s] Feb 13 12:15:04.846: INFO: Created: latency-svc-qj8nw Feb 13 12:15:04.863: INFO: Got endpoints: 
latency-svc-gtkgc [2.946059257s] Feb 13 12:15:04.863: INFO: Got endpoints: latency-svc-qj8nw [2.36952097s] Feb 13 12:15:05.020: INFO: Created: latency-svc-q86g4 Feb 13 12:15:05.036: INFO: Got endpoints: latency-svc-q86g4 [2.25467448s] Feb 13 12:15:05.083: INFO: Created: latency-svc-8pgn8 Feb 13 12:15:05.222: INFO: Got endpoints: latency-svc-8pgn8 [2.406760132s] Feb 13 12:15:05.242: INFO: Created: latency-svc-96dnm Feb 13 12:15:05.276: INFO: Got endpoints: latency-svc-96dnm [2.227936692s] Feb 13 12:15:05.328: INFO: Created: latency-svc-bsr5z Feb 13 12:15:05.440: INFO: Got endpoints: latency-svc-bsr5z [2.112743063s] Feb 13 12:15:05.466: INFO: Created: latency-svc-5gx8q Feb 13 12:15:05.489: INFO: Got endpoints: latency-svc-5gx8q [1.917189806s] Feb 13 12:15:05.609: INFO: Created: latency-svc-qqhzl Feb 13 12:15:05.628: INFO: Got endpoints: latency-svc-qqhzl [1.947472978s] Feb 13 12:15:05.711: INFO: Created: latency-svc-w6hpw Feb 13 12:15:05.853: INFO: Got endpoints: latency-svc-w6hpw [2.129450609s] Feb 13 12:15:05.881: INFO: Created: latency-svc-jp2l6 Feb 13 12:15:05.925: INFO: Got endpoints: latency-svc-jp2l6 [2.086587717s] Feb 13 12:15:06.116: INFO: Created: latency-svc-59v5z Feb 13 12:15:06.133: INFO: Got endpoints: latency-svc-59v5z [2.241104909s] Feb 13 12:15:06.328: INFO: Created: latency-svc-g6kww Feb 13 12:15:06.382: INFO: Got endpoints: latency-svc-g6kww [2.343065077s] Feb 13 12:15:06.385: INFO: Created: latency-svc-4s927 Feb 13 12:15:06.511: INFO: Got endpoints: latency-svc-4s927 [2.427239111s] Feb 13 12:15:06.754: INFO: Created: latency-svc-5zdvq Feb 13 12:15:06.790: INFO: Got endpoints: latency-svc-5zdvq [2.452415369s] Feb 13 12:15:06.843: INFO: Created: latency-svc-zgqd9 Feb 13 12:15:06.932: INFO: Got endpoints: latency-svc-zgqd9 [2.102851513s] Feb 13 12:15:06.962: INFO: Created: latency-svc-vwzms Feb 13 12:15:07.038: INFO: Got endpoints: latency-svc-vwzms [2.174718588s] Feb 13 12:15:07.056: INFO: Created: latency-svc-bnrsz Feb 13 12:15:07.167: INFO: Got 
endpoints: latency-svc-bnrsz [2.303820831s] Feb 13 12:15:07.231: INFO: Created: latency-svc-xkc2c Feb 13 12:15:07.252: INFO: Got endpoints: latency-svc-xkc2c [2.215748495s] Feb 13 12:15:07.355: INFO: Created: latency-svc-v4n4h Feb 13 12:15:07.373: INFO: Got endpoints: latency-svc-v4n4h [2.150701672s] Feb 13 12:15:07.536: INFO: Created: latency-svc-2xvk7 Feb 13 12:15:07.552: INFO: Got endpoints: latency-svc-2xvk7 [2.276590766s] Feb 13 12:15:07.612: INFO: Created: latency-svc-bflcx Feb 13 12:15:07.729: INFO: Got endpoints: latency-svc-bflcx [2.288851906s] Feb 13 12:15:07.937: INFO: Created: latency-svc-7t6d2 Feb 13 12:15:07.963: INFO: Got endpoints: latency-svc-7t6d2 [2.473510109s] Feb 13 12:15:08.033: INFO: Created: latency-svc-4vlwk Feb 13 12:15:08.187: INFO: Created: latency-svc-99nkx Feb 13 12:15:08.317: INFO: Got endpoints: latency-svc-4vlwk [2.688226575s] Feb 13 12:15:08.345: INFO: Got endpoints: latency-svc-99nkx [2.492775041s] Feb 13 12:15:08.378: INFO: Created: latency-svc-zftvp Feb 13 12:15:08.418: INFO: Got endpoints: latency-svc-zftvp [2.492775545s] Feb 13 12:15:08.424: INFO: Created: latency-svc-c964s Feb 13 12:15:08.590: INFO: Got endpoints: latency-svc-c964s [2.456647298s] Feb 13 12:15:08.650: INFO: Created: latency-svc-sztw8 Feb 13 12:15:08.777: INFO: Got endpoints: latency-svc-sztw8 [2.394958039s] Feb 13 12:15:08.815: INFO: Created: latency-svc-wqn68 Feb 13 12:15:08.830: INFO: Got endpoints: latency-svc-wqn68 [2.31920943s] Feb 13 12:15:08.958: INFO: Created: latency-svc-4lcdc Feb 13 12:15:08.966: INFO: Got endpoints: latency-svc-4lcdc [2.175328059s] Feb 13 12:15:09.038: INFO: Created: latency-svc-nd78n Feb 13 12:15:09.199: INFO: Got endpoints: latency-svc-nd78n [2.267270381s] Feb 13 12:15:09.227: INFO: Created: latency-svc-mb766 Feb 13 12:15:09.255: INFO: Got endpoints: latency-svc-mb766 [2.217211498s] Feb 13 12:15:09.380: INFO: Created: latency-svc-946qv Feb 13 12:15:09.395: INFO: Got endpoints: latency-svc-946qv [2.228424732s] Feb 13 12:15:09.397: 
INFO: Created: latency-svc-pgr65 Feb 13 12:15:09.414: INFO: Got endpoints: latency-svc-pgr65 [2.161655173s] Feb 13 12:15:09.457: INFO: Created: latency-svc-kmlp8 Feb 13 12:15:09.558: INFO: Got endpoints: latency-svc-kmlp8 [2.184432602s] Feb 13 12:15:09.595: INFO: Created: latency-svc-88f7q Feb 13 12:15:09.622: INFO: Got endpoints: latency-svc-88f7q [2.069023584s] Feb 13 12:15:09.749: INFO: Created: latency-svc-w5t9b Feb 13 12:15:09.765: INFO: Got endpoints: latency-svc-w5t9b [2.036177735s] Feb 13 12:15:09.878: INFO: Created: latency-svc-4n49c Feb 13 12:15:09.939: INFO: Got endpoints: latency-svc-4n49c [1.975623016s] Feb 13 12:15:10.073: INFO: Created: latency-svc-fgfs8 Feb 13 12:15:10.112: INFO: Got endpoints: latency-svc-fgfs8 [1.794229234s] Feb 13 12:15:10.160: INFO: Created: latency-svc-sqk88 Feb 13 12:15:10.297: INFO: Got endpoints: latency-svc-sqk88 [1.951508246s] Feb 13 12:15:10.358: INFO: Created: latency-svc-db2vd Feb 13 12:15:10.368: INFO: Got endpoints: latency-svc-db2vd [1.950349076s] Feb 13 12:15:10.499: INFO: Created: latency-svc-n9jnk Feb 13 12:15:10.518: INFO: Got endpoints: latency-svc-n9jnk [1.927792531s] Feb 13 12:15:10.580: INFO: Created: latency-svc-vt2nk Feb 13 12:15:10.742: INFO: Got endpoints: latency-svc-vt2nk [1.964706652s] Feb 13 12:15:10.810: INFO: Created: latency-svc-sgttl Feb 13 12:15:10.952: INFO: Got endpoints: latency-svc-sgttl [2.122241909s] Feb 13 12:15:10.973: INFO: Created: latency-svc-ncwdn Feb 13 12:15:10.990: INFO: Got endpoints: latency-svc-ncwdn [2.02430726s] Feb 13 12:15:11.189: INFO: Created: latency-svc-7fcb2 Feb 13 12:15:11.203: INFO: Got endpoints: latency-svc-7fcb2 [2.003022102s] Feb 13 12:15:11.280: INFO: Created: latency-svc-whz2p Feb 13 12:15:11.357: INFO: Got endpoints: latency-svc-whz2p [2.101554305s] Feb 13 12:15:11.391: INFO: Created: latency-svc-gmhkm Feb 13 12:15:11.400: INFO: Got endpoints: latency-svc-gmhkm [2.004566425s] Feb 13 12:15:11.466: INFO: Created: latency-svc-dzts6 Feb 13 12:15:11.537: INFO: Got 
endpoints: latency-svc-dzts6 [2.122493032s] Feb 13 12:15:11.592: INFO: Created: latency-svc-sh2qh Feb 13 12:15:11.609: INFO: Got endpoints: latency-svc-sh2qh [2.05090819s] Feb 13 12:15:11.763: INFO: Created: latency-svc-mzs4d Feb 13 12:15:11.824: INFO: Got endpoints: latency-svc-mzs4d [2.201890798s] Feb 13 12:15:11.978: INFO: Created: latency-svc-d9gqv Feb 13 12:15:12.015: INFO: Got endpoints: latency-svc-d9gqv [2.249553426s] Feb 13 12:15:12.061: INFO: Created: latency-svc-mn8nr Feb 13 12:15:12.241: INFO: Got endpoints: latency-svc-mn8nr [2.302128956s] Feb 13 12:15:12.285: INFO: Created: latency-svc-gjqvc Feb 13 12:15:12.307: INFO: Got endpoints: latency-svc-gjqvc [2.195116666s] Feb 13 12:15:12.433: INFO: Created: latency-svc-bjg24 Feb 13 12:15:12.462: INFO: Got endpoints: latency-svc-bjg24 [2.164969529s] Feb 13 12:15:12.657: INFO: Created: latency-svc-7scgg Feb 13 12:15:12.659: INFO: Got endpoints: latency-svc-7scgg [2.290429902s] Feb 13 12:15:12.725: INFO: Created: latency-svc-gndtm Feb 13 12:15:12.844: INFO: Got endpoints: latency-svc-gndtm [2.326101774s] Feb 13 12:15:12.871: INFO: Created: latency-svc-56g85 Feb 13 12:15:12.906: INFO: Got endpoints: latency-svc-56g85 [2.164147641s] Feb 13 12:15:13.020: INFO: Created: latency-svc-4jpd4 Feb 13 12:15:13.047: INFO: Got endpoints: latency-svc-4jpd4 [2.094709363s] Feb 13 12:15:13.253: INFO: Created: latency-svc-4s29z Feb 13 12:15:13.272: INFO: Got endpoints: latency-svc-4s29z [2.281594923s] Feb 13 12:15:13.355: INFO: Created: latency-svc-tp4s2 Feb 13 12:15:13.422: INFO: Got endpoints: latency-svc-tp4s2 [2.21923193s] Feb 13 12:15:13.457: INFO: Created: latency-svc-qll28 Feb 13 12:15:13.492: INFO: Got endpoints: latency-svc-qll28 [2.134650867s] Feb 13 12:15:13.605: INFO: Created: latency-svc-985fh Feb 13 12:15:13.662: INFO: Got endpoints: latency-svc-985fh [2.262130036s] Feb 13 12:15:13.814: INFO: Created: latency-svc-2p8cw Feb 13 12:15:13.819: INFO: Got endpoints: latency-svc-2p8cw [2.282777482s] Feb 13 12:15:13.996: 
INFO: Created: latency-svc-9bp8j Feb 13 12:15:14.095: INFO: Created: latency-svc-kxsml Feb 13 12:15:14.186: INFO: Got endpoints: latency-svc-kxsml [2.362675056s] Feb 13 12:15:14.186: INFO: Got endpoints: latency-svc-9bp8j [2.577703654s] Feb 13 12:15:14.238: INFO: Created: latency-svc-c8774 Feb 13 12:15:14.259: INFO: Got endpoints: latency-svc-c8774 [2.244438298s] Feb 13 12:15:14.409: INFO: Created: latency-svc-btlgk Feb 13 12:15:14.615: INFO: Got endpoints: latency-svc-btlgk [2.373240292s] Feb 13 12:15:14.721: INFO: Created: latency-svc-z9jp6 Feb 13 12:15:14.944: INFO: Got endpoints: latency-svc-z9jp6 [2.636365428s] Feb 13 12:15:16.742: INFO: Created: latency-svc-vz2rm Feb 13 12:15:16.925: INFO: Got endpoints: latency-svc-vz2rm [4.461970294s] Feb 13 12:15:16.965: INFO: Created: latency-svc-nb2bh Feb 13 12:15:16.982: INFO: Got endpoints: latency-svc-nb2bh [4.322705655s] Feb 13 12:15:17.109: INFO: Created: latency-svc-t7djs Feb 13 12:15:17.174: INFO: Got endpoints: latency-svc-t7djs [4.329850566s] Feb 13 12:15:17.307: INFO: Created: latency-svc-hflxw Feb 13 12:15:17.337: INFO: Got endpoints: latency-svc-hflxw [4.430636333s] Feb 13 12:15:17.452: INFO: Created: latency-svc-b4c84 Feb 13 12:15:17.478: INFO: Got endpoints: latency-svc-b4c84 [4.430230055s] Feb 13 12:15:17.535: INFO: Created: latency-svc-7t7pm Feb 13 12:15:17.665: INFO: Got endpoints: latency-svc-7t7pm [4.392878499s] Feb 13 12:15:17.709: INFO: Created: latency-svc-zd5sn Feb 13 12:15:17.752: INFO: Got endpoints: latency-svc-zd5sn [4.33006557s] Feb 13 12:15:18.028: INFO: Created: latency-svc-lb6lh Feb 13 12:15:18.047: INFO: Got endpoints: latency-svc-lb6lh [4.554913437s] Feb 13 12:15:18.110: INFO: Created: latency-svc-56bsf Feb 13 12:15:18.458: INFO: Got endpoints: latency-svc-56bsf [4.795530832s] Feb 13 12:15:18.503: INFO: Created: latency-svc-4fs8b Feb 13 12:15:18.674: INFO: Created: latency-svc-l6r56 Feb 13 12:15:18.707: INFO: Got endpoints: latency-svc-l6r56 [4.520603684s] Feb 13 12:15:18.712: INFO: Got 
endpoints: latency-svc-4fs8b [4.892964361s] Feb 13 12:15:18.882: INFO: Created: latency-svc-xr4rr Feb 13 12:15:18.907: INFO: Got endpoints: latency-svc-xr4rr [4.720640861s] Feb 13 12:15:19.038: INFO: Created: latency-svc-dcmrj Feb 13 12:15:19.069: INFO: Got endpoints: latency-svc-dcmrj [4.809023064s] Feb 13 12:15:19.277: INFO: Created: latency-svc-2d9l2 Feb 13 12:15:19.277: INFO: Got endpoints: latency-svc-2d9l2 [4.662036758s] Feb 13 12:15:19.338: INFO: Created: latency-svc-s25vn Feb 13 12:15:19.478: INFO: Got endpoints: latency-svc-s25vn [4.53409712s] Feb 13 12:15:19.700: INFO: Created: latency-svc-cgkc8 Feb 13 12:15:19.736: INFO: Got endpoints: latency-svc-cgkc8 [2.811597945s] Feb 13 12:15:19.847: INFO: Created: latency-svc-qmnqx Feb 13 12:15:19.875: INFO: Got endpoints: latency-svc-qmnqx [2.893080112s] Feb 13 12:15:19.936: INFO: Created: latency-svc-fpqp8 Feb 13 12:15:20.051: INFO: Got endpoints: latency-svc-fpqp8 [2.87701085s] Feb 13 12:15:20.075: INFO: Created: latency-svc-6h5tl Feb 13 12:15:20.093: INFO: Got endpoints: latency-svc-6h5tl [2.756010156s] Feb 13 12:15:20.289: INFO: Created: latency-svc-52s7k Feb 13 12:15:20.296: INFO: Got endpoints: latency-svc-52s7k [2.817915758s] Feb 13 12:15:20.477: INFO: Created: latency-svc-7tb2z Feb 13 12:15:20.501: INFO: Got endpoints: latency-svc-7tb2z [2.836151602s] Feb 13 12:15:20.710: INFO: Created: latency-svc-fmzjw Feb 13 12:15:20.717: INFO: Got endpoints: latency-svc-fmzjw [2.964258922s] Feb 13 12:15:20.745: INFO: Created: latency-svc-9mjbx Feb 13 12:15:20.909: INFO: Got endpoints: latency-svc-9mjbx [2.862282578s] Feb 13 12:15:20.934: INFO: Created: latency-svc-rp5hf Feb 13 12:15:20.992: INFO: Got endpoints: latency-svc-rp5hf [2.533806857s] Feb 13 12:15:21.584: INFO: Created: latency-svc-hwchg Feb 13 12:15:21.602: INFO: Got endpoints: latency-svc-hwchg [2.894723346s] Feb 13 12:15:22.109: INFO: Created: latency-svc-lbvks Feb 13 12:15:22.124: INFO: Got endpoints: latency-svc-lbvks [3.411241656s] Feb 13 12:15:22.411: 
INFO: Created: latency-svc-zjdlq Feb 13 12:15:22.464: INFO: Got endpoints: latency-svc-zjdlq [3.557067082s] Feb 13 12:15:23.154: INFO: Created: latency-svc-xk7gm Feb 13 12:15:23.167: INFO: Got endpoints: latency-svc-xk7gm [4.098530298s] Feb 13 12:15:23.876: INFO: Created: latency-svc-b29hj Feb 13 12:15:23.918: INFO: Got endpoints: latency-svc-b29hj [4.641111407s] Feb 13 12:15:24.408: INFO: Created: latency-svc-srqms Feb 13 12:15:24.420: INFO: Got endpoints: latency-svc-srqms [4.942335627s] Feb 13 12:15:24.566: INFO: Created: latency-svc-ddx5j Feb 13 12:15:24.584: INFO: Got endpoints: latency-svc-ddx5j [4.847763585s] Feb 13 12:15:24.781: INFO: Created: latency-svc-p7hmn Feb 13 12:15:24.798: INFO: Got endpoints: latency-svc-p7hmn [4.923129703s] Feb 13 12:15:24.858: INFO: Created: latency-svc-x7g5v Feb 13 12:15:24.973: INFO: Got endpoints: latency-svc-x7g5v [4.922079808s] Feb 13 12:15:25.043: INFO: Created: latency-svc-qjzl8 Feb 13 12:15:25.212: INFO: Got endpoints: latency-svc-qjzl8 [5.118473768s] Feb 13 12:15:25.218: INFO: Created: latency-svc-fc44f Feb 13 12:15:25.325: INFO: Got endpoints: latency-svc-fc44f [5.028882294s] Feb 13 12:15:25.340: INFO: Created: latency-svc-4zr7c Feb 13 12:15:25.351: INFO: Got endpoints: latency-svc-4zr7c [4.849605289s] Feb 13 12:15:25.407: INFO: Created: latency-svc-m6hb8 Feb 13 12:15:25.416: INFO: Got endpoints: latency-svc-m6hb8 [4.698929551s] Feb 13 12:15:25.416: INFO: Latencies: [233.31433ms 389.145731ms 422.19047ms 649.691445ms 767.992743ms 1.00796611s 1.052064553s 1.237725943s 1.294010736s 1.516961344s 1.787329004s 1.794229234s 1.872955783s 1.917189806s 1.927792531s 1.947472978s 1.950349076s 1.951508246s 1.964706652s 1.975623016s 2.003022102s 2.004566425s 2.02430726s 2.036177735s 2.05090819s 2.055055179s 2.058997351s 2.060401493s 2.069023584s 2.069075349s 2.086587717s 2.094709363s 2.101554305s 2.102851513s 2.112743063s 2.122241909s 2.122493032s 2.128272996s 2.129450609s 2.134650867s 2.150701672s 2.161655173s 2.164147641s 
2.164969529s 2.174718588s 2.175328059s 2.184432602s 2.185115569s 2.195116666s 2.201890798s 2.203715006s 2.215748495s 2.215809213s 2.217211498s 2.21923193s 2.227936692s 2.228424732s 2.241104909s 2.244438298s 2.249553426s 2.25467448s 2.262130036s 2.267270381s 2.276590766s 2.281594923s 2.282777482s 2.287992833s 2.288851906s 2.288991894s 2.290429902s 2.298749473s 2.302128956s 2.303820831s 2.306945346s 2.318445042s 2.31920943s 2.319799019s 2.320787692s 2.326101774s 2.334143677s 2.343065077s 2.351638638s 2.362675056s 2.363408692s 2.36952097s 2.369904533s 2.373240292s 2.373665874s 2.380571987s 2.380585644s 2.389634077s 2.394958039s 2.401579421s 2.406760132s 2.411042196s 2.420692697s 2.426345474s 2.427239111s 2.448902231s 2.450513676s 2.452415369s 2.456647298s 2.470715409s 2.473510109s 2.47898471s 2.492775041s 2.492775545s 2.497180048s 2.49772642s 2.502355387s 2.503696805s 2.527227568s 2.533806857s 2.551617342s 2.563298111s 2.577703654s 2.580051841s 2.602734789s 2.608232052s 2.608559734s 2.62570402s 2.634003488s 2.636365428s 2.636860653s 2.643361651s 2.644143376s 2.649448844s 2.667448091s 2.67151531s 2.674189946s 2.688038915s 2.688226575s 2.709607502s 2.738200544s 2.756010156s 2.766829938s 2.771945886s 2.781820136s 2.789110808s 2.794846521s 2.803876685s 2.808578822s 2.809532911s 2.811597945s 2.817915758s 2.836151602s 2.840661165s 2.862282578s 2.873464879s 2.87701085s 2.893080112s 2.894723346s 2.89647425s 2.936022569s 2.946059257s 2.964258922s 2.965044162s 2.982452463s 2.983913672s 2.993676788s 3.013890251s 3.080226232s 3.23537027s 3.24954242s 3.254829327s 3.305971766s 3.373300307s 3.386625405s 3.411241656s 3.451318902s 3.5388605s 3.557067082s 3.558602711s 3.696862244s 3.70404158s 4.098530298s 4.322705655s 4.329850566s 4.33006557s 4.392878499s 4.430230055s 4.430636333s 4.461970294s 4.520603684s 4.53409712s 4.554913437s 4.641111407s 4.662036758s 4.698929551s 4.720640861s 4.795530832s 4.809023064s 4.847763585s 4.849605289s 4.892964361s 4.922079808s 4.923129703s 4.942335627s 
5.028882294s 5.118473768s] Feb 13 12:15:25.416: INFO: 50 %ile: 2.452415369s Feb 13 12:15:25.416: INFO: 90 %ile: 4.430230055s Feb 13 12:15:25.416: INFO: 99 %ile: 5.028882294s Feb 13 12:15:25.416: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 12:15:25.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-6p2g6" for this suite. Feb 13 12:16:33.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 12:16:33.688: INFO: namespace: e2e-tests-svc-latency-6p2g6, resource: bindings, ignored listing per whitelist Feb 13 12:16:33.942: INFO: namespace e2e-tests-svc-latency-6p2g6 deletion completed in 1m8.520802629s • [SLOW TEST:116.759 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 12:16:33.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update 
pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 13 12:16:34.311: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Feb 13 12:16:34.412: INFO: Number of nodes with available pods: 0 Feb 13 12:16:34.412: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 12:16:35.445: INFO: Number of nodes with available pods: 0 Feb 13 12:16:35.445: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 12:16:37.107: INFO: Number of nodes with available pods: 0 Feb 13 12:16:37.107: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 12:16:37.445: INFO: Number of nodes with available pods: 0 Feb 13 12:16:37.445: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 12:16:38.434: INFO: Number of nodes with available pods: 0 Feb 13 12:16:38.434: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 12:16:39.503: INFO: Number of nodes with available pods: 0 Feb 13 12:16:39.503: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 12:16:41.396: INFO: Number of nodes with available pods: 0 Feb 13 12:16:41.396: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 12:16:41.674: INFO: Number of nodes with available pods: 0 Feb 13 12:16:41.674: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 12:16:42.449: INFO: Number of nodes with available pods: 0 Feb 13 12:16:42.449: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 12:16:43.446: INFO: Number of nodes with available pods: 0 Feb 13 12:16:43.446: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 12:16:44.433: INFO: Number of nodes with available pods: 1 Feb 13 
12:16:44.434: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Feb 13 12:16:44.534: INFO: Wrong image for pod: daemon-set-zklnl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 13 12:16:45.556: INFO: Wrong image for pod: daemon-set-zklnl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 13 12:16:46.607: INFO: Wrong image for pod: daemon-set-zklnl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 13 12:16:47.564: INFO: Wrong image for pod: daemon-set-zklnl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 13 12:16:48.566: INFO: Wrong image for pod: daemon-set-zklnl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 13 12:16:49.573: INFO: Wrong image for pod: daemon-set-zklnl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 13 12:16:50.611: INFO: Wrong image for pod: daemon-set-zklnl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 13 12:16:51.565: INFO: Wrong image for pod: daemon-set-zklnl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 13 12:16:51.565: INFO: Pod daemon-set-zklnl is not available Feb 13 12:16:52.572: INFO: Wrong image for pod: daemon-set-zklnl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 13 12:16:52.572: INFO: Pod daemon-set-zklnl is not available Feb 13 12:16:53.562: INFO: Wrong image for pod: daemon-set-zklnl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Feb 13 12:16:53.562: INFO: Pod daemon-set-zklnl is not available Feb 13 12:16:54.575: INFO: Wrong image for pod: daemon-set-zklnl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 13 12:16:54.575: INFO: Pod daemon-set-zklnl is not available Feb 13 12:16:55.565: INFO: Wrong image for pod: daemon-set-zklnl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 13 12:16:55.565: INFO: Pod daemon-set-zklnl is not available Feb 13 12:16:56.571: INFO: Wrong image for pod: daemon-set-zklnl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 13 12:16:56.571: INFO: Pod daemon-set-zklnl is not available Feb 13 12:16:57.564: INFO: Wrong image for pod: daemon-set-zklnl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 13 12:16:57.564: INFO: Pod daemon-set-zklnl is not available Feb 13 12:16:58.582: INFO: Wrong image for pod: daemon-set-zklnl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 13 12:16:58.582: INFO: Pod daemon-set-zklnl is not available Feb 13 12:16:59.673: INFO: Wrong image for pod: daemon-set-zklnl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 13 12:16:59.673: INFO: Pod daemon-set-zklnl is not available Feb 13 12:17:00.612: INFO: Wrong image for pod: daemon-set-zklnl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 13 12:17:00.612: INFO: Pod daemon-set-zklnl is not available Feb 13 12:17:01.567: INFO: Wrong image for pod: daemon-set-zklnl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Feb 13 12:17:01.567: INFO: Pod daemon-set-zklnl is not available Feb 13 12:17:03.559: INFO: Pod daemon-set-w2nb6 is not available STEP: Check that daemon pods are still running on every node of the cluster. Feb 13 12:17:03.573: INFO: Number of nodes with available pods: 0 Feb 13 12:17:03.573: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 12:17:04.693: INFO: Number of nodes with available pods: 0 Feb 13 12:17:04.693: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 12:17:05.600: INFO: Number of nodes with available pods: 0 Feb 13 12:17:05.600: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 12:17:06.637: INFO: Number of nodes with available pods: 0 Feb 13 12:17:06.637: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 12:17:09.511: INFO: Number of nodes with available pods: 0 Feb 13 12:17:09.511: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 12:17:09.610: INFO: Number of nodes with available pods: 0 Feb 13 12:17:09.610: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 12:17:10.639: INFO: Number of nodes with available pods: 0 Feb 13 12:17:10.639: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 13 12:17:11.639: INFO: Number of nodes with available pods: 1 Feb 13 12:17:11.639: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-89s4h, will wait for the garbage collector to delete the pods Feb 13 12:17:11.757: INFO: Deleting DaemonSet.extensions daemon-set took: 29.964398ms Feb 13 12:17:11.958: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.295546ms Feb 13 
12:17:18.806: INFO: Number of nodes with available pods: 0 Feb 13 12:17:18.806: INFO: Number of running nodes: 0, number of available pods: 0 Feb 13 12:17:18.815: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-89s4h/daemonsets","resourceVersion":"21532136"},"items":null} Feb 13 12:17:18.823: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-89s4h/pods","resourceVersion":"21532136"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 12:17:18.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-89s4h" for this suite. Feb 13 12:17:24.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 12:17:25.072: INFO: namespace: e2e-tests-daemonsets-89s4h, resource: bindings, ignored listing per whitelist Feb 13 12:17:25.190: INFO: namespace e2e-tests-daemonsets-89s4h deletion completed in 6.352996806s • [SLOW TEST:51.247 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 12:17:25.191: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 13 12:17:25.504: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb5f0568-4e5a-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-bnmjm" to be "success or failure" Feb 13 12:17:25.530: INFO: Pod "downwardapi-volume-cb5f0568-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 26.719566ms Feb 13 12:17:27.736: INFO: Pod "downwardapi-volume-cb5f0568-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232268133s Feb 13 12:17:29.971: INFO: Pod "downwardapi-volume-cb5f0568-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.466886824s Feb 13 12:17:31.979: INFO: Pod "downwardapi-volume-cb5f0568-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.475264935s Feb 13 12:17:34.044: INFO: Pod "downwardapi-volume-cb5f0568-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.540352763s Feb 13 12:17:36.065: INFO: Pod "downwardapi-volume-cb5f0568-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.561201707s Feb 13 12:17:38.076: INFO: Pod "downwardapi-volume-cb5f0568-4e5a-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.572079745s STEP: Saw pod success Feb 13 12:17:38.076: INFO: Pod "downwardapi-volume-cb5f0568-4e5a-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 12:17:38.079: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-cb5f0568-4e5a-11ea-aba9-0242ac110007 container client-container: STEP: delete the pod Feb 13 12:17:38.339: INFO: Waiting for pod downwardapi-volume-cb5f0568-4e5a-11ea-aba9-0242ac110007 to disappear Feb 13 12:17:38.355: INFO: Pod downwardapi-volume-cb5f0568-4e5a-11ea-aba9-0242ac110007 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 12:17:38.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bnmjm" for this suite. Feb 13 12:17:44.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 12:17:44.727: INFO: namespace: e2e-tests-projected-bnmjm, resource: bindings, ignored listing per whitelist Feb 13 12:17:44.736: INFO: namespace e2e-tests-projected-bnmjm deletion completed in 6.37185639s • [SLOW TEST:19.545 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 12:17:44.737: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-jg97 STEP: Creating a pod to test atomic-volume-subpath Feb 13 12:17:44.985: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-jg97" in namespace "e2e-tests-subpath-dwmqq" to be "success or failure" Feb 13 12:17:44.999: INFO: Pod "pod-subpath-test-projected-jg97": Phase="Pending", Reason="", readiness=false. Elapsed: 13.915538ms Feb 13 12:17:47.021: INFO: Pod "pod-subpath-test-projected-jg97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035010393s Feb 13 12:17:49.289: INFO: Pod "pod-subpath-test-projected-jg97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.303488654s Feb 13 12:17:51.737: INFO: Pod "pod-subpath-test-projected-jg97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.751587857s Feb 13 12:17:53.753: INFO: Pod "pod-subpath-test-projected-jg97": Phase="Pending", Reason="", readiness=false. Elapsed: 8.76780711s Feb 13 12:17:55.767: INFO: Pod "pod-subpath-test-projected-jg97": Phase="Pending", Reason="", readiness=false. Elapsed: 10.781772337s Feb 13 12:17:58.160: INFO: Pod "pod-subpath-test-projected-jg97": Phase="Pending", Reason="", readiness=false. Elapsed: 13.174087977s Feb 13 12:18:00.628: INFO: Pod "pod-subpath-test-projected-jg97": Phase="Pending", Reason="", readiness=false. Elapsed: 15.642102699s Feb 13 12:18:02.643: INFO: Pod "pod-subpath-test-projected-jg97": Phase="Running", Reason="", readiness=false. 
Elapsed: 17.657506165s Feb 13 12:18:04.661: INFO: Pod "pod-subpath-test-projected-jg97": Phase="Running", Reason="", readiness=false. Elapsed: 19.67558626s Feb 13 12:18:06.713: INFO: Pod "pod-subpath-test-projected-jg97": Phase="Running", Reason="", readiness=false. Elapsed: 21.727169442s Feb 13 12:18:08.733: INFO: Pod "pod-subpath-test-projected-jg97": Phase="Running", Reason="", readiness=false. Elapsed: 23.747452691s Feb 13 12:18:10.750: INFO: Pod "pod-subpath-test-projected-jg97": Phase="Running", Reason="", readiness=false. Elapsed: 25.764881966s Feb 13 12:18:12.767: INFO: Pod "pod-subpath-test-projected-jg97": Phase="Running", Reason="", readiness=false. Elapsed: 27.78185727s Feb 13 12:18:14.787: INFO: Pod "pod-subpath-test-projected-jg97": Phase="Running", Reason="", readiness=false. Elapsed: 29.801591181s Feb 13 12:18:16.802: INFO: Pod "pod-subpath-test-projected-jg97": Phase="Running", Reason="", readiness=false. Elapsed: 31.816473634s Feb 13 12:18:18.898: INFO: Pod "pod-subpath-test-projected-jg97": Phase="Running", Reason="", readiness=false. Elapsed: 33.912506508s Feb 13 12:18:20.927: INFO: Pod "pod-subpath-test-projected-jg97": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 35.941558874s STEP: Saw pod success Feb 13 12:18:20.927: INFO: Pod "pod-subpath-test-projected-jg97" satisfied condition "success or failure" Feb 13 12:18:20.941: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-jg97 container test-container-subpath-projected-jg97: STEP: delete the pod Feb 13 12:18:21.771: INFO: Waiting for pod pod-subpath-test-projected-jg97 to disappear Feb 13 12:18:22.091: INFO: Pod pod-subpath-test-projected-jg97 no longer exists STEP: Deleting pod pod-subpath-test-projected-jg97 Feb 13 12:18:22.091: INFO: Deleting pod "pod-subpath-test-projected-jg97" in namespace "e2e-tests-subpath-dwmqq" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 12:18:22.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-dwmqq" for this suite. Feb 13 12:18:28.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 12:18:28.404: INFO: namespace: e2e-tests-subpath-dwmqq, resource: bindings, ignored listing per whitelist Feb 13 12:18:28.454: INFO: namespace e2e-tests-subpath-dwmqq deletion completed in 6.339203478s • [SLOW TEST:43.717 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 12:18:28.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f10cf9a9-4e5a-11ea-aba9-0242ac110007 STEP: Creating a pod to test consume secrets Feb 13 12:18:28.671: INFO: Waiting up to 5m0s for pod "pod-secrets-f10e8a22-4e5a-11ea-aba9-0242ac110007" in namespace "e2e-tests-secrets-9b6th" to be "success or failure" Feb 13 12:18:28.688: INFO: Pod "pod-secrets-f10e8a22-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 17.590454ms Feb 13 12:18:30.971: INFO: Pod "pod-secrets-f10e8a22-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300232318s Feb 13 12:18:32.988: INFO: Pod "pod-secrets-f10e8a22-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317192626s Feb 13 12:18:35.084: INFO: Pod "pod-secrets-f10e8a22-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413475118s Feb 13 12:18:37.109: INFO: Pod "pod-secrets-f10e8a22-4e5a-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.438597958s Feb 13 12:18:39.127: INFO: Pod "pod-secrets-f10e8a22-4e5a-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.455960244s STEP: Saw pod success Feb 13 12:18:39.127: INFO: Pod "pod-secrets-f10e8a22-4e5a-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 12:18:39.135: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-f10e8a22-4e5a-11ea-aba9-0242ac110007 container secret-volume-test: STEP: delete the pod Feb 13 12:18:39.298: INFO: Waiting for pod pod-secrets-f10e8a22-4e5a-11ea-aba9-0242ac110007 to disappear Feb 13 12:18:39.304: INFO: Pod pod-secrets-f10e8a22-4e5a-11ea-aba9-0242ac110007 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 12:18:39.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-9b6th" for this suite. Feb 13 12:18:45.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 12:18:45.413: INFO: namespace: e2e-tests-secrets-9b6th, resource: bindings, ignored listing per whitelist Feb 13 12:18:45.727: INFO: namespace e2e-tests-secrets-9b6th deletion completed in 6.418410267s • [SLOW TEST:17.273 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 12:18:45.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-fb57db1c-4e5a-11ea-aba9-0242ac110007 STEP: Creating configMap with name cm-test-opt-upd-fb57dc08-4e5a-11ea-aba9-0242ac110007 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-fb57db1c-4e5a-11ea-aba9-0242ac110007 STEP: Updating configmap cm-test-opt-upd-fb57dc08-4e5a-11ea-aba9-0242ac110007 STEP: Creating configMap with name cm-test-opt-create-fb57dc53-4e5a-11ea-aba9-0242ac110007 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 12:19:08.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-j2psh" for this suite. 
Feb 13 12:19:32.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 12:19:32.496: INFO: namespace: e2e-tests-configmap-j2psh, resource: bindings, ignored listing per whitelist Feb 13 12:19:32.671: INFO: namespace e2e-tests-configmap-j2psh deletion completed in 24.323476749s • [SLOW TEST:46.943 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 12:19:32.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Feb 13 12:19:33.450: INFO: created pod pod-service-account-defaultsa Feb 13 12:19:33.450: INFO: pod pod-service-account-defaultsa service account token volume mount: true Feb 13 12:19:33.493: INFO: created pod pod-service-account-mountsa Feb 13 12:19:33.493: INFO: pod pod-service-account-mountsa service account token volume mount: true Feb 13 12:19:33.632: INFO: created pod pod-service-account-nomountsa Feb 13 12:19:33.632: INFO: pod pod-service-account-nomountsa service account token volume mount: false 
Feb 13 12:19:33.858: INFO: created pod pod-service-account-defaultsa-mountspec Feb 13 12:19:33.858: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Feb 13 12:19:33.965: INFO: created pod pod-service-account-mountsa-mountspec Feb 13 12:19:33.965: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Feb 13 12:19:34.002: INFO: created pod pod-service-account-nomountsa-mountspec Feb 13 12:19:34.002: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Feb 13 12:19:34.026: INFO: created pod pod-service-account-defaultsa-nomountspec Feb 13 12:19:34.026: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Feb 13 12:19:34.112: INFO: created pod pod-service-account-mountsa-nomountspec Feb 13 12:19:34.112: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Feb 13 12:19:34.166: INFO: created pod pod-service-account-nomountsa-nomountspec Feb 13 12:19:34.166: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 12:19:34.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-kdcp6" for this suite. 
Feb 13 12:20:03.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 12:20:04.031: INFO: namespace: e2e-tests-svcaccounts-kdcp6, resource: bindings, ignored listing per whitelist Feb 13 12:20:04.100: INFO: namespace e2e-tests-svcaccounts-kdcp6 deletion completed in 29.905598164s • [SLOW TEST:31.429 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 12:20:04.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0213 12:20:19.033557 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 13 12:20:19.033: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 12:20:19.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-2jfml" for this suite. 
Feb 13 12:20:42.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 12:20:42.261: INFO: namespace: e2e-tests-gc-2jfml, resource: bindings, ignored listing per whitelist Feb 13 12:20:42.338: INFO: namespace e2e-tests-gc-2jfml deletion completed in 23.28910447s • [SLOW TEST:38.238 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 12:20:42.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Feb 13 12:20:44.673: INFO: Pod name pod-release: Found 0 pods out of 1 Feb 13 12:20:49.692: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 12:20:50.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-replication-controller-tbjjp" for this suite. Feb 13 12:21:01.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 12:21:01.122: INFO: namespace: e2e-tests-replication-controller-tbjjp, resource: bindings, ignored listing per whitelist Feb 13 12:21:01.135: INFO: namespace e2e-tests-replication-controller-tbjjp deletion completed in 10.162402007s • [SLOW TEST:18.796 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 12:21:01.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Feb 13 12:21:03.610: INFO: Waiting up to 5m0s for pod "pod-4d5cde4d-4e5b-11ea-aba9-0242ac110007" in namespace "e2e-tests-emptydir-zjkf8" to be "success or failure" Feb 13 12:21:03.690: INFO: Pod "pod-4d5cde4d-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. 
Elapsed: 79.658699ms Feb 13 12:21:05.924: INFO: Pod "pod-4d5cde4d-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313163415s Feb 13 12:21:07.935: INFO: Pod "pod-4d5cde4d-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324761585s Feb 13 12:21:10.284: INFO: Pod "pod-4d5cde4d-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.673808337s Feb 13 12:21:12.313: INFO: Pod "pod-4d5cde4d-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.702272692s Feb 13 12:21:14.402: INFO: Pod "pod-4d5cde4d-4e5b-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.792085499s STEP: Saw pod success Feb 13 12:21:14.403: INFO: Pod "pod-4d5cde4d-4e5b-11ea-aba9-0242ac110007" satisfied condition "success or failure" Feb 13 12:21:14.417: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4d5cde4d-4e5b-11ea-aba9-0242ac110007 container test-container: STEP: delete the pod Feb 13 12:21:14.612: INFO: Waiting for pod pod-4d5cde4d-4e5b-11ea-aba9-0242ac110007 to disappear Feb 13 12:21:14.632: INFO: Pod pod-4d5cde4d-4e5b-11ea-aba9-0242ac110007 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 12:21:14.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-zjkf8" for this suite. 
Feb 13 12:21:20.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 12:21:20.895: INFO: namespace: e2e-tests-emptydir-zjkf8, resource: bindings, ignored listing per whitelist Feb 13 12:21:21.000: INFO: namespace e2e-tests-emptydir-zjkf8 deletion completed in 6.254636385s • [SLOW TEST:19.864 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 12:21:21.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-57e9f113-4e5b-11ea-aba9-0242ac110007 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-57e9f113-4e5b-11ea-aba9-0242ac110007 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 12:21:33.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-k6xgr" for this suite. 
Feb 13 12:21:57.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:21:57.638: INFO: namespace: e2e-tests-configmap-k6xgr, resource: bindings, ignored listing per whitelist
Feb 13 12:21:57.687: INFO: namespace e2e-tests-configmap-k6xgr deletion completed in 24.191742163s
• [SLOW TEST:36.688 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:21:57.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 13 12:21:57.976: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6dc66fd0-4e5b-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-5gd24" to be "success or failure"
Feb 13 12:21:57.992: INFO: Pod "downwardapi-volume-6dc66fd0-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 16.43497ms
Feb 13 12:22:00.032: INFO: Pod "downwardapi-volume-6dc66fd0-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056529045s
Feb 13 12:22:02.063: INFO: Pod "downwardapi-volume-6dc66fd0-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086948259s
Feb 13 12:22:04.184: INFO: Pod "downwardapi-volume-6dc66fd0-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.208282956s
Feb 13 12:22:06.204: INFO: Pod "downwardapi-volume-6dc66fd0-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.22856527s
Feb 13 12:22:08.673: INFO: Pod "downwardapi-volume-6dc66fd0-4e5b-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.696720379s
STEP: Saw pod success
Feb 13 12:22:08.673: INFO: Pod "downwardapi-volume-6dc66fd0-4e5b-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:22:08.690: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6dc66fd0-4e5b-11ea-aba9-0242ac110007 container client-container:
STEP: delete the pod
Feb 13 12:22:10.004: INFO: Waiting for pod downwardapi-volume-6dc66fd0-4e5b-11ea-aba9-0242ac110007 to disappear
Feb 13 12:22:10.012: INFO: Pod downwardapi-volume-6dc66fd0-4e5b-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:22:10.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5gd24" for this suite.
Feb 13 12:22:16.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:22:16.318: INFO: namespace: e2e-tests-projected-5gd24, resource: bindings, ignored listing per whitelist
Feb 13 12:22:16.364: INFO: namespace e2e-tests-projected-5gd24 deletion completed in 6.343621553s
• [SLOW TEST:18.677 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:22:16.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 13 12:22:16.787: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78f8ec37-4e5b-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-w4c26" to be "success or failure"
Feb 13 12:22:16.807: INFO: Pod "downwardapi-volume-78f8ec37-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 19.922088ms
Feb 13 12:22:18.844: INFO: Pod "downwardapi-volume-78f8ec37-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057352584s
Feb 13 12:22:20.973: INFO: Pod "downwardapi-volume-78f8ec37-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185844926s
Feb 13 12:22:23.488: INFO: Pod "downwardapi-volume-78f8ec37-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.701531285s
Feb 13 12:22:25.498: INFO: Pod "downwardapi-volume-78f8ec37-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.711607312s
Feb 13 12:22:27.510: INFO: Pod "downwardapi-volume-78f8ec37-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.722879648s
Feb 13 12:22:29.529: INFO: Pod "downwardapi-volume-78f8ec37-4e5b-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.741876301s
STEP: Saw pod success
Feb 13 12:22:29.529: INFO: Pod "downwardapi-volume-78f8ec37-4e5b-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:22:29.542: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-78f8ec37-4e5b-11ea-aba9-0242ac110007 container client-container:
STEP: delete the pod
Feb 13 12:22:30.716: INFO: Waiting for pod downwardapi-volume-78f8ec37-4e5b-11ea-aba9-0242ac110007 to disappear
Feb 13 12:22:30.949: INFO: Pod downwardapi-volume-78f8ec37-4e5b-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:22:30.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w4c26" for this suite.
Feb 13 12:22:37.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:22:37.304: INFO: namespace: e2e-tests-projected-w4c26, resource: bindings, ignored listing per whitelist
Feb 13 12:22:37.379: INFO: namespace e2e-tests-projected-w4c26 deletion completed in 6.396308965s
• [SLOW TEST:21.013 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:22:37.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Feb 13 12:22:49.833: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-8570a4cc-4e5b-11ea-aba9-0242ac110007", GenerateName:"", Namespace:"e2e-tests-pods-mzdb2", SelfLink:"/api/v1/namespaces/e2e-tests-pods-mzdb2/pods/pod-submit-remove-8570a4cc-4e5b-11ea-aba9-0242ac110007", UID:"857249c9-4e5b-11ea-a994-fa163e34d433", ResourceVersion:"21533019", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717193357, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"596408756"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-cghk5", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002708e40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cghk5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0026ef018), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00281e480), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0026ef050)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0026ef070)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0026ef078), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0026ef07c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717193357, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717193367, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717193367, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717193357, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001feaa00), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001feaa20), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://6c64a707985319f95df4da341aa92d6ef3195cb6239f9f19587b301d13010ca5"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:22:55.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-mzdb2" for this suite.
Feb 13 12:23:01.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:23:01.750: INFO: namespace: e2e-tests-pods-mzdb2, resource: bindings, ignored listing per whitelist
Feb 13 12:23:01.848: INFO: namespace e2e-tests-pods-mzdb2 deletion completed in 6.275279497s
• [SLOW TEST:24.469 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:23:01.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-m45n5.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-m45n5.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-m45n5.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-m45n5.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-m45n5.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-m45n5.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 13 12:23:20.351: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-m45n5/dns-test-940765b7-4e5b-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-940765b7-4e5b-11ea-aba9-0242ac110007)
Feb 13 12:23:20.356: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-m45n5/dns-test-940765b7-4e5b-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-940765b7-4e5b-11ea-aba9-0242ac110007)
Feb 13 12:23:20.368: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-m45n5/dns-test-940765b7-4e5b-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-940765b7-4e5b-11ea-aba9-0242ac110007)
Feb 13 12:23:20.376: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-m45n5/dns-test-940765b7-4e5b-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-940765b7-4e5b-11ea-aba9-0242ac110007)
Feb 13 12:23:20.381: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-m45n5/dns-test-940765b7-4e5b-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-940765b7-4e5b-11ea-aba9-0242ac110007)
Feb 13 12:23:20.385: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-m45n5/dns-test-940765b7-4e5b-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-940765b7-4e5b-11ea-aba9-0242ac110007)
Feb 13 12:23:20.440: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-m45n5.svc.cluster.local from pod e2e-tests-dns-m45n5/dns-test-940765b7-4e5b-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-940765b7-4e5b-11ea-aba9-0242ac110007)
Feb 13 12:23:20.544: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-m45n5/dns-test-940765b7-4e5b-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-940765b7-4e5b-11ea-aba9-0242ac110007)
Feb 13 12:23:20.565: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-m45n5/dns-test-940765b7-4e5b-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-940765b7-4e5b-11ea-aba9-0242ac110007)
Feb 13 12:23:20.575: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-m45n5/dns-test-940765b7-4e5b-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-940765b7-4e5b-11ea-aba9-0242ac110007)
Feb 13 12:23:20.575: INFO: Lookups using e2e-tests-dns-m45n5/dns-test-940765b7-4e5b-11ea-aba9-0242ac110007 failed for: [jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-m45n5.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]
Feb 13 12:23:25.803: INFO: DNS probes using e2e-tests-dns-m45n5/dns-test-940765b7-4e5b-11ea-aba9-0242ac110007 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:23:25.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-m45n5" for this suite.
Feb 13 12:23:34.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:23:34.261: INFO: namespace: e2e-tests-dns-m45n5, resource: bindings, ignored listing per whitelist
Feb 13 12:23:34.276: INFO: namespace e2e-tests-dns-m45n5 deletion completed in 8.299639836s
• [SLOW TEST:32.427 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:23:34.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 13 12:23:34.427: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a74d28ad-4e5b-11ea-aba9-0242ac110007" in namespace "e2e-tests-downward-api-nvdcd" to be "success or failure"
Feb 13 12:23:34.438: INFO: Pod "downwardapi-volume-a74d28ad-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.711369ms
Feb 13 12:23:36.585: INFO: Pod "downwardapi-volume-a74d28ad-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158191808s
Feb 13 12:23:38.650: INFO: Pod "downwardapi-volume-a74d28ad-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.223673663s
Feb 13 12:23:41.212: INFO: Pod "downwardapi-volume-a74d28ad-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.78507715s
Feb 13 12:23:43.225: INFO: Pod "downwardapi-volume-a74d28ad-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.798407753s
Feb 13 12:23:47.159: INFO: Pod "downwardapi-volume-a74d28ad-4e5b-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 12.732264105s
Feb 13 12:23:50.935: INFO: Pod "downwardapi-volume-a74d28ad-4e5b-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.507859264s
STEP: Saw pod success
Feb 13 12:23:50.935: INFO: Pod "downwardapi-volume-a74d28ad-4e5b-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:23:50.955: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a74d28ad-4e5b-11ea-aba9-0242ac110007 container client-container:
STEP: delete the pod
Feb 13 12:23:51.543: INFO: Waiting for pod downwardapi-volume-a74d28ad-4e5b-11ea-aba9-0242ac110007 to disappear
Feb 13 12:23:51.551: INFO: Pod downwardapi-volume-a74d28ad-4e5b-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:23:51.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nvdcd" for this suite.
Feb 13 12:23:57.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:23:57.704: INFO: namespace: e2e-tests-downward-api-nvdcd, resource: bindings, ignored listing per whitelist
Feb 13 12:23:58.070: INFO: namespace e2e-tests-downward-api-nvdcd deletion completed in 6.514178693s
• [SLOW TEST:23.795 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:23:58.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 13 12:23:58.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 13 12:23:58.403: INFO: stderr: ""
Feb 13 12:23:58.403: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:23:58.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fnp9d" for this suite.
Feb 13 12:24:04.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:24:04.704: INFO: namespace: e2e-tests-kubectl-fnp9d, resource: bindings, ignored listing per whitelist
Feb 13 12:24:04.739: INFO: namespace e2e-tests-kubectl-fnp9d deletion completed in 6.306653067s
• [SLOW TEST:6.668 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl version
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:24:04.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-rn8t5
Feb 13 12:24:15.011: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-rn8t5
STEP: checking the pod's current state and verifying that restartCount is present
Feb 13 12:24:15.018: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:28:16.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-rn8t5" for this suite.
Feb 13 12:28:25.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 12:28:25.132: INFO: namespace: e2e-tests-container-probe-rn8t5, resource: bindings, ignored listing per whitelist Feb 13 12:28:25.324: INFO: namespace e2e-tests-container-probe-rn8t5 deletion completed in 8.403917389s • [SLOW TEST:260.585 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 12:28:25.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-54de8960-4e5c-11ea-aba9-0242ac110007 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-54de8960-4e5c-11ea-aba9-0242ac110007 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 13 12:29:54.894: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-rbh8k" for this suite. Feb 13 12:30:20.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 13 12:30:21.062: INFO: namespace: e2e-tests-projected-rbh8k, resource: bindings, ignored listing per whitelist Feb 13 12:30:21.229: INFO: namespace e2e-tests-projected-rbh8k deletion completed in 26.327883287s • [SLOW TEST:115.904 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 13 12:30:21.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 13 12:30:21.497: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 19.885071ms)
Feb 13 12:30:21.505: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.072139ms)
Feb 13 12:30:21.512: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.002633ms)
Feb 13 12:30:21.517: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.513732ms)
Feb 13 12:30:21.523: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.927082ms)
Feb 13 12:30:21.529: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.216307ms)
Feb 13 12:30:21.537: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.544809ms)
Feb 13 12:30:21.542: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.884373ms)
Feb 13 12:30:21.548: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.986173ms)
Feb 13 12:30:21.552: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.314717ms)
Feb 13 12:30:21.556: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.237426ms)
Feb 13 12:30:21.561: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.839561ms)
Feb 13 12:30:21.566: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.533616ms)
Feb 13 12:30:21.571: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.860951ms)
Feb 13 12:30:21.575: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.076563ms)
Feb 13 12:30:21.580: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.762001ms)
Feb 13 12:30:21.584: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.003326ms)
Feb 13 12:30:21.589: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.992018ms)
Feb 13 12:30:21.593: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.036376ms)
Feb 13 12:30:21.598: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.86565ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:30:21.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-w7qk9" for this suite.
Feb 13 12:30:27.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:30:27.803: INFO: namespace: e2e-tests-proxy-w7qk9, resource: bindings, ignored listing per whitelist
Feb 13 12:30:27.921: INFO: namespace e2e-tests-proxy-w7qk9 deletion completed in 6.31866654s

• [SLOW TEST:6.692 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:30:27.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-9ddeb547-4e5c-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 13 12:30:28.114: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9de00a33-4e5c-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-fcrdm" to be "success or failure"
Feb 13 12:30:28.117: INFO: Pod "pod-projected-configmaps-9de00a33-4e5c-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.785462ms
Feb 13 12:30:30.544: INFO: Pod "pod-projected-configmaps-9de00a33-4e5c-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.430636138s
Feb 13 12:30:32.571: INFO: Pod "pod-projected-configmaps-9de00a33-4e5c-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.456930541s
Feb 13 12:30:35.134: INFO: Pod "pod-projected-configmaps-9de00a33-4e5c-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.019957757s
Feb 13 12:30:37.149: INFO: Pod "pod-projected-configmaps-9de00a33-4e5c-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.035551277s
Feb 13 12:30:39.177: INFO: Pod "pod-projected-configmaps-9de00a33-4e5c-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.063567004s
STEP: Saw pod success
Feb 13 12:30:39.177: INFO: Pod "pod-projected-configmaps-9de00a33-4e5c-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:30:39.185: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-9de00a33-4e5c-11ea-aba9-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 13 12:30:39.320: INFO: Waiting for pod pod-projected-configmaps-9de00a33-4e5c-11ea-aba9-0242ac110007 to disappear
Feb 13 12:30:39.336: INFO: Pod pod-projected-configmaps-9de00a33-4e5c-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:30:39.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fcrdm" for this suite.
Feb 13 12:30:45.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:30:45.458: INFO: namespace: e2e-tests-projected-fcrdm, resource: bindings, ignored listing per whitelist
Feb 13 12:30:45.542: INFO: namespace e2e-tests-projected-fcrdm deletion completed in 6.190089253s

• [SLOW TEST:17.620 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:30:45.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 13 12:30:45.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-f9cql'
Feb 13 12:30:48.041: INFO: stderr: ""
Feb 13 12:30:48.042: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb 13 12:30:58.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-f9cql -o json'
Feb 13 12:30:58.285: INFO: stderr: ""
Feb 13 12:30:58.286: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-13T12:30:47Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-f9cql\",\n        \"resourceVersion\": \"21533768\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-f9cql/pods/e2e-test-nginx-pod\",\n        \"uid\": \"a9bd57f5-4e5c-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-8c5dd\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": 
\"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-8c5dd\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-8c5dd\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-13T12:30:48Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-13T12:30:56Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-13T12:30:56Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-13T12:30:48Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://a143b91b8015dfa0d02466fb8cdf6983efcd5e0414ef4f834be4f2d9606d1aa6\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                
        \"startedAt\": \"2020-02-13T12:30:55Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-13T12:30:48Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 13 12:30:58.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-f9cql'
Feb 13 12:30:58.803: INFO: stderr: ""
Feb 13 12:30:58.803: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Feb 13 12:30:58.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-f9cql'
Feb 13 12:31:07.129: INFO: stderr: ""
Feb 13 12:31:07.129: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:31:07.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-f9cql" for this suite.
Feb 13 12:31:13.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:31:13.335: INFO: namespace: e2e-tests-kubectl-f9cql, resource: bindings, ignored listing per whitelist
Feb 13 12:31:13.428: INFO: namespace e2e-tests-kubectl-f9cql deletion completed in 6.218439461s

• [SLOW TEST:27.885 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:31:13.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 13 12:31:13.639: INFO: Waiting up to 5m0s for pod "pod-b901daa9-4e5c-11ea-aba9-0242ac110007" in namespace "e2e-tests-emptydir-7k8zv" to be "success or failure"
Feb 13 12:31:13.708: INFO: Pod "pod-b901daa9-4e5c-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 68.909052ms
Feb 13 12:31:15.723: INFO: Pod "pod-b901daa9-4e5c-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08427361s
Feb 13 12:31:17.748: INFO: Pod "pod-b901daa9-4e5c-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109111103s
Feb 13 12:31:19.764: INFO: Pod "pod-b901daa9-4e5c-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125354582s
Feb 13 12:31:21.888: INFO: Pod "pod-b901daa9-4e5c-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.24966641s
Feb 13 12:31:24.088: INFO: Pod "pod-b901daa9-4e5c-11ea-aba9-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 10.449242503s
Feb 13 12:31:26.103: INFO: Pod "pod-b901daa9-4e5c-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.464386346s
STEP: Saw pod success
Feb 13 12:31:26.103: INFO: Pod "pod-b901daa9-4e5c-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:31:26.106: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b901daa9-4e5c-11ea-aba9-0242ac110007 container test-container: 
STEP: delete the pod
Feb 13 12:31:26.784: INFO: Waiting for pod pod-b901daa9-4e5c-11ea-aba9-0242ac110007 to disappear
Feb 13 12:31:28.069: INFO: Pod pod-b901daa9-4e5c-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:31:28.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7k8zv" for this suite.
Feb 13 12:31:34.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:31:34.495: INFO: namespace: e2e-tests-emptydir-7k8zv, resource: bindings, ignored listing per whitelist
Feb 13 12:31:34.783: INFO: namespace e2e-tests-emptydir-7k8zv deletion completed in 6.69641898s

• [SLOW TEST:21.355 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:31:34.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Feb 13 12:31:34.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rm4jl'
Feb 13 12:31:35.412: INFO: stderr: ""
Feb 13 12:31:35.412: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Feb 13 12:31:36.971: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:31:36.971: INFO: Found 0 / 1
Feb 13 12:31:37.570: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:31:37.570: INFO: Found 0 / 1
Feb 13 12:31:38.429: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:31:38.429: INFO: Found 0 / 1
Feb 13 12:31:39.426: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:31:39.426: INFO: Found 0 / 1
Feb 13 12:31:40.451: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:31:40.451: INFO: Found 0 / 1
Feb 13 12:31:41.654: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:31:41.654: INFO: Found 0 / 1
Feb 13 12:31:42.469: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:31:42.470: INFO: Found 0 / 1
Feb 13 12:31:43.424: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:31:43.424: INFO: Found 0 / 1
Feb 13 12:31:44.430: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:31:44.430: INFO: Found 0 / 1
Feb 13 12:31:45.429: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:31:45.429: INFO: Found 1 / 1
Feb 13 12:31:45.429: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 13 12:31:45.438: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:31:45.438: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for a matching strings
Feb 13 12:31:45.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ls2rn redis-master --namespace=e2e-tests-kubectl-rm4jl'
Feb 13 12:31:45.636: INFO: stderr: ""
Feb 13 12:31:45.636: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 13 Feb 12:31:43.214 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 13 Feb 12:31:43.214 # Server started, Redis version 3.2.12\n1:M 13 Feb 12:31:43.214 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 13 Feb 12:31:43.214 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb 13 12:31:45.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ls2rn redis-master --namespace=e2e-tests-kubectl-rm4jl --tail=1'
Feb 13 12:31:45.824: INFO: stderr: ""
Feb 13 12:31:45.824: INFO: stdout: "1:M 13 Feb 12:31:43.214 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb 13 12:31:45.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ls2rn redis-master --namespace=e2e-tests-kubectl-rm4jl --limit-bytes=1'
Feb 13 12:31:46.011: INFO: stderr: ""
Feb 13 12:31:46.011: INFO: stdout: " "
STEP: exposing timestamps
Feb 13 12:31:46.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ls2rn redis-master --namespace=e2e-tests-kubectl-rm4jl --tail=1 --timestamps'
Feb 13 12:31:46.168: INFO: stderr: ""
Feb 13 12:31:46.168: INFO: stdout: "2020-02-13T12:31:43.215043383Z 1:M 13 Feb 12:31:43.214 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb 13 12:31:48.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ls2rn redis-master --namespace=e2e-tests-kubectl-rm4jl --since=1s'
Feb 13 12:31:48.948: INFO: stderr: ""
Feb 13 12:31:48.948: INFO: stdout: ""
Feb 13 12:31:48.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ls2rn redis-master --namespace=e2e-tests-kubectl-rm4jl --since=24h'
Feb 13 12:31:49.189: INFO: stderr: ""
Feb 13 12:31:49.189: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 13 Feb 12:31:43.214 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 13 Feb 12:31:43.214 # Server started, Redis version 3.2.12\n1:M 13 Feb 12:31:43.214 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 13 Feb 12:31:43.214 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Feb 13 12:31:49.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rm4jl'
Feb 13 12:31:49.313: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 13 12:31:49.313: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb 13 12:31:49.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-rm4jl'
Feb 13 12:31:49.445: INFO: stderr: "No resources found.\n"
Feb 13 12:31:49.445: INFO: stdout: ""
Feb 13 12:31:49.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-rm4jl -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 13 12:31:49.600: INFO: stderr: ""
Feb 13 12:31:49.600: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:31:49.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rm4jl" for this suite.
Feb 13 12:31:55.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:31:55.776: INFO: namespace: e2e-tests-kubectl-rm4jl, resource: bindings, ignored listing per whitelist
Feb 13 12:31:55.874: INFO: namespace e2e-tests-kubectl-rm4jl deletion completed in 6.215136331s

• [SLOW TEST:21.090 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:31:55.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-d25732ce-4e5c-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 13 12:31:56.127: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d257fac3-4e5c-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-9zpbl" to be "success or failure"
Feb 13 12:31:56.161: INFO: Pod "pod-projected-secrets-d257fac3-4e5c-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 33.371665ms
Feb 13 12:31:58.172: INFO: Pod "pod-projected-secrets-d257fac3-4e5c-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044350915s
Feb 13 12:32:00.193: INFO: Pod "pod-projected-secrets-d257fac3-4e5c-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065797082s
Feb 13 12:32:03.549: INFO: Pod "pod-projected-secrets-d257fac3-4e5c-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.421706348s
Feb 13 12:32:06.518: INFO: Pod "pod-projected-secrets-d257fac3-4e5c-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.390376356s
Feb 13 12:32:08.577: INFO: Pod "pod-projected-secrets-d257fac3-4e5c-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.45010823s
STEP: Saw pod success
Feb 13 12:32:08.578: INFO: Pod "pod-projected-secrets-d257fac3-4e5c-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:32:08.596: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-d257fac3-4e5c-11ea-aba9-0242ac110007 container projected-secret-volume-test: 
STEP: delete the pod
Feb 13 12:32:08.841: INFO: Waiting for pod pod-projected-secrets-d257fac3-4e5c-11ea-aba9-0242ac110007 to disappear
Feb 13 12:32:08.860: INFO: Pod pod-projected-secrets-d257fac3-4e5c-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:32:08.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9zpbl" for this suite.
Feb 13 12:32:14.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:32:15.117: INFO: namespace: e2e-tests-projected-9zpbl, resource: bindings, ignored listing per whitelist
Feb 13 12:32:15.153: INFO: namespace e2e-tests-projected-9zpbl deletion completed in 6.274203473s

• [SLOW TEST:19.279 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
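The projected-secret test above mounts a Secret into the pod through a `projected` volume, waits for the pod to reach the "success or failure" condition, and checks the container's output. A minimal sketch of an equivalent manifest, with illustrative names (the test itself generates UUID-suffixed names like `projected-secret-test-d25732ce-...`):

```yaml
# Sketch only: names and the key/value pair are hypothetical,
# not taken from the test run above.
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test
```

The framework then fetches the container's logs (as in the "Trying to get logs" line above) and compares them against the secret's value.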
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:32:15.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 13 12:32:15.303: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:32:34.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-jv99t" for this suite.
Feb 13 12:32:40.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:32:40.721: INFO: namespace: e2e-tests-init-container-jv99t, resource: bindings, ignored listing per whitelist
Feb 13 12:32:40.730: INFO: namespace e2e-tests-init-container-jv99t deletion completed in 6.201104265s

• [SLOW TEST:25.577 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
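The init-container test above verifies that with `restartPolicy: Never`, a failing init container marks the whole pod Failed and the app containers never start. A minimal sketch of the kind of pod involved (names and images are illustrative):

```yaml
# Sketch: a RestartPolicy=Never pod whose init container exits non-zero,
# so the app container is never started and the pod phase becomes Failed.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/false"]   # exits 1: init fails
  containers:
  - name: run1
    image: busybox
    command: ["/bin/true"]    # never runs, because init never succeeded
```

With `restartPolicy: Always` or `OnFailure` the kubelet would instead retry the init container; `Never` is what makes the failure terminal, which is the behavior this conformance test asserts.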
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:32:40.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 13 12:32:40.959: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lqt8h,SelfLink:/api/v1/namespaces/e2e-tests-watch-lqt8h/configmaps/e2e-watch-test-configmap-a,UID:ed11ceef-4e5c-11ea-a994-fa163e34d433,ResourceVersion:21534023,Generation:0,CreationTimestamp:2020-02-13 12:32:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 13 12:32:40.959: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lqt8h,SelfLink:/api/v1/namespaces/e2e-tests-watch-lqt8h/configmaps/e2e-watch-test-configmap-a,UID:ed11ceef-4e5c-11ea-a994-fa163e34d433,ResourceVersion:21534023,Generation:0,CreationTimestamp:2020-02-13 12:32:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 13 12:32:50.988: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lqt8h,SelfLink:/api/v1/namespaces/e2e-tests-watch-lqt8h/configmaps/e2e-watch-test-configmap-a,UID:ed11ceef-4e5c-11ea-a994-fa163e34d433,ResourceVersion:21534036,Generation:0,CreationTimestamp:2020-02-13 12:32:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 13 12:32:50.988: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lqt8h,SelfLink:/api/v1/namespaces/e2e-tests-watch-lqt8h/configmaps/e2e-watch-test-configmap-a,UID:ed11ceef-4e5c-11ea-a994-fa163e34d433,ResourceVersion:21534036,Generation:0,CreationTimestamp:2020-02-13 12:32:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 13 12:33:01.011: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lqt8h,SelfLink:/api/v1/namespaces/e2e-tests-watch-lqt8h/configmaps/e2e-watch-test-configmap-a,UID:ed11ceef-4e5c-11ea-a994-fa163e34d433,ResourceVersion:21534048,Generation:0,CreationTimestamp:2020-02-13 12:32:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 13 12:33:01.011: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lqt8h,SelfLink:/api/v1/namespaces/e2e-tests-watch-lqt8h/configmaps/e2e-watch-test-configmap-a,UID:ed11ceef-4e5c-11ea-a994-fa163e34d433,ResourceVersion:21534048,Generation:0,CreationTimestamp:2020-02-13 12:32:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 13 12:33:11.035: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lqt8h,SelfLink:/api/v1/namespaces/e2e-tests-watch-lqt8h/configmaps/e2e-watch-test-configmap-a,UID:ed11ceef-4e5c-11ea-a994-fa163e34d433,ResourceVersion:21534061,Generation:0,CreationTimestamp:2020-02-13 12:32:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 13 12:33:11.035: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-lqt8h,SelfLink:/api/v1/namespaces/e2e-tests-watch-lqt8h/configmaps/e2e-watch-test-configmap-a,UID:ed11ceef-4e5c-11ea-a994-fa163e34d433,ResourceVersion:21534061,Generation:0,CreationTimestamp:2020-02-13 12:32:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 13 12:33:21.069: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-lqt8h,SelfLink:/api/v1/namespaces/e2e-tests-watch-lqt8h/configmaps/e2e-watch-test-configmap-b,UID:04f773c7-4e5d-11ea-a994-fa163e34d433,ResourceVersion:21534074,Generation:0,CreationTimestamp:2020-02-13 12:33:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 13 12:33:21.069: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-lqt8h,SelfLink:/api/v1/namespaces/e2e-tests-watch-lqt8h/configmaps/e2e-watch-test-configmap-b,UID:04f773c7-4e5d-11ea-a994-fa163e34d433,ResourceVersion:21534074,Generation:0,CreationTimestamp:2020-02-13 12:33:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 13 12:33:31.114: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-lqt8h,SelfLink:/api/v1/namespaces/e2e-tests-watch-lqt8h/configmaps/e2e-watch-test-configmap-b,UID:04f773c7-4e5d-11ea-a994-fa163e34d433,ResourceVersion:21534087,Generation:0,CreationTimestamp:2020-02-13 12:33:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 13 12:33:31.114: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-lqt8h,SelfLink:/api/v1/namespaces/e2e-tests-watch-lqt8h/configmaps/e2e-watch-test-configmap-b,UID:04f773c7-4e5d-11ea-a994-fa163e34d433,ResourceVersion:21534087,Generation:0,CreationTimestamp:2020-02-13 12:33:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:33:41.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-lqt8h" for this suite.
Feb 13 12:33:47.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:33:47.433: INFO: namespace: e2e-tests-watch-lqt8h, resource: bindings, ignored listing per whitelist
Feb 13 12:33:47.641: INFO: namespace e2e-tests-watch-lqt8h deletion completed in 6.511105671s

• [SLOW TEST:66.911 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
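The Watchers test above opens three watches filtered by label selector (label A, label B, and A-or-B) and asserts that each ADDED/MODIFIED/DELETED event, visible in the "Got :" lines, is delivered to exactly the watchers whose selector matches. The object being mutated, reconstructed from those log lines:

```yaml
# Reconstructed from the events above: the label is what the watches
# select on, and the "mutation" key is bumped on each update so
# successive MODIFIED events are distinguishable.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A   # matched by watch A and watch A-or-B
data:
  mutation: "1"
```

Note that every event appears twice in the log: once from the single-label watch and once from the A-or-B watch, which is precisely the duplication the test expects.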
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:33:47.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 13 12:33:47.896: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 13 12:33:47.906: INFO: Waiting for terminating namespaces to be deleted...
Feb 13 12:33:47.909: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb 13 12:33:47.925: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 13 12:33:47.925: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 13 12:33:47.925: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 13 12:33:47.925: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 13 12:33:47.925: INFO: 	Container weave ready: true, restart count 0
Feb 13 12:33:47.925: INFO: 	Container weave-npc ready: true, restart count 0
Feb 13 12:33:47.925: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 13 12:33:47.925: INFO: 	Container coredns ready: true, restart count 0
Feb 13 12:33:47.925: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 13 12:33:47.925: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 13 12:33:47.925: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 13 12:33:47.925: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 13 12:33:47.925: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Feb 13 12:33:48.004: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 13 12:33:48.004: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 13 12:33:48.004: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb 13 12:33:48.004: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Feb 13 12:33:48.004: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Feb 13 12:33:48.004: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb 13 12:33:48.004: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 13 12:33:48.004: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1508cc71-4e5d-11ea-aba9-0242ac110007.15f2f64c9c4f1754], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-98ltn/filler-pod-1508cc71-4e5d-11ea-aba9-0242ac110007 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1508cc71-4e5d-11ea-aba9-0242ac110007.15f2f64d91958fa3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1508cc71-4e5d-11ea-aba9-0242ac110007.15f2f64e438da174], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1508cc71-4e5d-11ea-aba9-0242ac110007.15f2f64e74f06ebe], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f2f64ef14aa355], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:33:59.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-98ltn" for this suite.
Feb 13 12:34:07.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:34:07.451: INFO: namespace: e2e-tests-sched-pred-98ltn, resource: bindings, ignored listing per whitelist
Feb 13 12:34:07.506: INFO: namespace e2e-tests-sched-pred-98ltn deletion completed in 8.255207364s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:19.865 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
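The SchedulerPredicates test above first tallies the CPU requests of the pods already on the node (the "requesting resource cpu=" lines), starts filler pods to consume most of the remaining allocatable CPU, and then creates one more pod whose request cannot fit, expecting the `FailedScheduling` event with "0/1 nodes are available: 1 Insufficient cpu." A sketch of that final pod, with a hypothetical request value (the test computes the exact amount from the node's allocatable CPU):

```yaml
# Sketch: a pod requesting more CPU than the node has left is rejected
# by the scheduler; the request value below is illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "600m"   # hypothetical; must exceed remaining allocatable CPU
```

Scheduling decisions are made against declared requests, not actual usage, which is why the filler pods only need to *request* the CPU to block the additional pod.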
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:34:07.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 13 12:34:07.872: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Feb 13 12:34:07.881: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-gd2kl/daemonsets","resourceVersion":"21534176"},"items":null}

Feb 13 12:34:07.884: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-gd2kl/pods","resourceVersion":"21534176"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:34:07.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-gd2kl" for this suite.
Feb 13 12:34:14.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:34:15.126: INFO: namespace: e2e-tests-daemonsets-gd2kl, resource: bindings, ignored listing per whitelist
Feb 13 12:34:15.234: INFO: namespace e2e-tests-daemonsets-gd2kl deletion completed in 7.334634518s

S [SKIPPING] [7.727 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Feb 13 12:34:07.872: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:34:15.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Feb 13 12:34:15.508: INFO: Waiting up to 5m0s for pod "client-containers-255adc00-4e5d-11ea-aba9-0242ac110007" in namespace "e2e-tests-containers-467sw" to be "success or failure"
Feb 13 12:34:15.522: INFO: Pod "client-containers-255adc00-4e5d-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 13.596041ms
Feb 13 12:34:17.758: INFO: Pod "client-containers-255adc00-4e5d-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249548978s
Feb 13 12:34:19.781: INFO: Pod "client-containers-255adc00-4e5d-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.272619971s
Feb 13 12:34:21.902: INFO: Pod "client-containers-255adc00-4e5d-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.393692476s
Feb 13 12:34:23.931: INFO: Pod "client-containers-255adc00-4e5d-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.422382217s
Feb 13 12:34:25.950: INFO: Pod "client-containers-255adc00-4e5d-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.441573218s
STEP: Saw pod success
Feb 13 12:34:25.950: INFO: Pod "client-containers-255adc00-4e5d-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:34:25.955: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-255adc00-4e5d-11ea-aba9-0242ac110007 container test-container: 
STEP: delete the pod
Feb 13 12:34:26.076: INFO: Waiting for pod client-containers-255adc00-4e5d-11ea-aba9-0242ac110007 to disappear
Feb 13 12:34:26.200: INFO: Pod client-containers-255adc00-4e5d-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:34:26.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-467sw" for this suite.
Feb 13 12:34:32.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:34:32.711: INFO: namespace: e2e-tests-containers-467sw, resource: bindings, ignored listing per whitelist
Feb 13 12:34:32.721: INFO: namespace e2e-tests-containers-467sw deletion completed in 6.509446159s

• [SLOW TEST:17.486 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
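The Docker Containers test above ("override all") checks that a pod spec's `command` replaces the image's ENTRYPOINT and its `args` replaces the image's CMD. A minimal sketch of such a pod (names and values are illustrative, not the test's generated ones):

```yaml
# Sketch: command overrides the image ENTRYPOINT, args overrides CMD.
# If only args were set, the image ENTRYPOINT would run with these args;
# setting both replaces everything the image defined.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]
    args: ["override", "arguments"]
```

The framework then reads the container's logs and verifies the echoed output matches the overridden command line.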
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:34:32.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 13 12:34:32.998: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2fcb911e-4e5d-11ea-aba9-0242ac110007" in namespace "e2e-tests-downward-api-5ngvv" to be "success or failure"
Feb 13 12:34:33.027: INFO: Pod "downwardapi-volume-2fcb911e-4e5d-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 29.031344ms
Feb 13 12:34:35.057: INFO: Pod "downwardapi-volume-2fcb911e-4e5d-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059117796s
Feb 13 12:34:37.099: INFO: Pod "downwardapi-volume-2fcb911e-4e5d-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101375684s
Feb 13 12:34:39.350: INFO: Pod "downwardapi-volume-2fcb911e-4e5d-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.351777501s
Feb 13 12:34:41.383: INFO: Pod "downwardapi-volume-2fcb911e-4e5d-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.385293536s
Feb 13 12:34:43.405: INFO: Pod "downwardapi-volume-2fcb911e-4e5d-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.406589744s
Feb 13 12:34:45.427: INFO: Pod "downwardapi-volume-2fcb911e-4e5d-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.429208183s
STEP: Saw pod success
Feb 13 12:34:45.427: INFO: Pod "downwardapi-volume-2fcb911e-4e5d-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:34:45.439: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2fcb911e-4e5d-11ea-aba9-0242ac110007 container client-container: 
STEP: delete the pod
Feb 13 12:34:46.149: INFO: Waiting for pod downwardapi-volume-2fcb911e-4e5d-11ea-aba9-0242ac110007 to disappear
Feb 13 12:34:46.162: INFO: Pod downwardapi-volume-2fcb911e-4e5d-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:34:46.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5ngvv" for this suite.
Feb 13 12:34:52.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:34:52.454: INFO: namespace: e2e-tests-downward-api-5ngvv, resource: bindings, ignored listing per whitelist
Feb 13 12:34:52.661: INFO: namespace e2e-tests-downward-api-5ngvv deletion completed in 6.476517349s

• [SLOW TEST:19.938 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:34:52.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0213 12:35:35.395255       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 13 12:35:35.395: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:35:35.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-fc6sw" for this suite.
Feb 13 12:35:51.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:35:51.680: INFO: namespace: e2e-tests-gc-fc6sw, resource: bindings, ignored listing per whitelist
Feb 13 12:35:51.706: INFO: namespace e2e-tests-gc-fc6sw deletion completed in 16.30619028s

• [SLOW TEST:59.046 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:35:51.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 13 12:35:51.912: INFO: PodSpec: initContainers in spec.initContainers
Feb 13 12:37:17.921: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-5ee39da5-4e5d-11ea-aba9-0242ac110007", GenerateName:"", Namespace:"e2e-tests-init-container-gvlbg", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-gvlbg/pods/pod-init-5ee39da5-4e5d-11ea-aba9-0242ac110007", UID:"5ee4d367-4e5d-11ea-a994-fa163e34d433", ResourceVersion:"21534645", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717194151, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"912195351"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-qv8t9", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0009ea100), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qv8t9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qv8t9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qv8t9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000010458), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00170e000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000010700)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000010730)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000010738), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00001073c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194152, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194152, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194152, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194151, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc001bb8260), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000197030)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000197180)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://13c49cd1b1277f77d5c84367c75668e013e558af7bacbddf9cc0f9d1a5db7208"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001bb8460), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001bb8340), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:37:17.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-gvlbg" for this suite.
Feb 13 12:37:41.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:37:42.242: INFO: namespace: e2e-tests-init-container-gvlbg, resource: bindings, ignored listing per whitelist
Feb 13 12:37:42.274: INFO: namespace e2e-tests-init-container-gvlbg deletion completed in 24.327682756s

• [SLOW TEST:110.568 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:37:42.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 13 12:37:42.875: INFO: Waiting up to 5m0s for pod "pod-a0f835d3-4e5d-11ea-aba9-0242ac110007" in namespace "e2e-tests-emptydir-xkw2s" to be "success or failure"
Feb 13 12:37:42.899: INFO: Pod "pod-a0f835d3-4e5d-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 23.677228ms
Feb 13 12:37:44.926: INFO: Pod "pod-a0f835d3-4e5d-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050354057s
Feb 13 12:37:46.944: INFO: Pod "pod-a0f835d3-4e5d-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068133303s
Feb 13 12:37:49.071: INFO: Pod "pod-a0f835d3-4e5d-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.195349975s
Feb 13 12:37:51.665: INFO: Pod "pod-a0f835d3-4e5d-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.789485958s
Feb 13 12:37:54.263: INFO: Pod "pod-a0f835d3-4e5d-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.387674512s
STEP: Saw pod success
Feb 13 12:37:54.263: INFO: Pod "pod-a0f835d3-4e5d-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:37:54.288: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a0f835d3-4e5d-11ea-aba9-0242ac110007 container test-container: 
STEP: delete the pod
Feb 13 12:37:54.659: INFO: Waiting for pod pod-a0f835d3-4e5d-11ea-aba9-0242ac110007 to disappear
Feb 13 12:37:54.762: INFO: Pod pod-a0f835d3-4e5d-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:37:54.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xkw2s" for this suite.
Feb 13 12:38:00.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:38:00.895: INFO: namespace: e2e-tests-emptydir-xkw2s, resource: bindings, ignored listing per whitelist
Feb 13 12:38:01.013: INFO: namespace e2e-tests-emptydir-xkw2s deletion completed in 6.233005939s

• [SLOW TEST:18.739 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:38:01.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-abf374c3-4e5d-11ea-aba9-0242ac110007
STEP: Creating configMap with name cm-test-opt-upd-abf37540-4e5d-11ea-aba9-0242ac110007
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-abf374c3-4e5d-11ea-aba9-0242ac110007
STEP: Updating configmap cm-test-opt-upd-abf37540-4e5d-11ea-aba9-0242ac110007
STEP: Creating configMap with name cm-test-opt-create-abf37558-4e5d-11ea-aba9-0242ac110007
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:39:28.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gjbtq" for this suite.
Feb 13 12:39:48.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:39:48.982: INFO: namespace: e2e-tests-projected-gjbtq, resource: bindings, ignored listing per whitelist
Feb 13 12:39:48.993: INFO: namespace e2e-tests-projected-gjbtq deletion completed in 20.176172046s

• [SLOW TEST:107.979 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:39:48.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb 13 12:39:49.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:39:49.496: INFO: stderr: ""
Feb 13 12:39:49.496: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 13 12:39:49.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:39:49.711: INFO: stderr: ""
Feb 13 12:39:49.711: INFO: stdout: "update-demo-nautilus-5x6xd update-demo-nautilus-vzjjc "
Feb 13 12:39:49.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5x6xd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:39:49.895: INFO: stderr: ""
Feb 13 12:39:49.895: INFO: stdout: ""
Feb 13 12:39:49.895: INFO: update-demo-nautilus-5x6xd is created but not running
Feb 13 12:39:54.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:39:55.111: INFO: stderr: ""
Feb 13 12:39:55.111: INFO: stdout: "update-demo-nautilus-5x6xd update-demo-nautilus-vzjjc "
Feb 13 12:39:55.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5x6xd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:39:55.204: INFO: stderr: ""
Feb 13 12:39:55.204: INFO: stdout: ""
Feb 13 12:39:55.204: INFO: update-demo-nautilus-5x6xd is created but not running
Feb 13 12:40:00.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:00.366: INFO: stderr: ""
Feb 13 12:40:00.367: INFO: stdout: "update-demo-nautilus-5x6xd update-demo-nautilus-vzjjc "
Feb 13 12:40:00.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5x6xd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:00.488: INFO: stderr: ""
Feb 13 12:40:00.488: INFO: stdout: ""
Feb 13 12:40:00.488: INFO: update-demo-nautilus-5x6xd is created but not running
Feb 13 12:40:05.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:05.738: INFO: stderr: ""
Feb 13 12:40:05.739: INFO: stdout: "update-demo-nautilus-5x6xd update-demo-nautilus-vzjjc "
Feb 13 12:40:05.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5x6xd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:05.903: INFO: stderr: ""
Feb 13 12:40:05.903: INFO: stdout: "true"
Feb 13 12:40:05.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5x6xd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:06.009: INFO: stderr: ""
Feb 13 12:40:06.009: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 13 12:40:06.009: INFO: validating pod update-demo-nautilus-5x6xd
Feb 13 12:40:06.039: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 13 12:40:06.039: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 13 12:40:06.039: INFO: update-demo-nautilus-5x6xd is verified up and running
Feb 13 12:40:06.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzjjc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:06.123: INFO: stderr: ""
Feb 13 12:40:06.123: INFO: stdout: "true"
Feb 13 12:40:06.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzjjc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:06.225: INFO: stderr: ""
Feb 13 12:40:06.225: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 13 12:40:06.225: INFO: validating pod update-demo-nautilus-vzjjc
Feb 13 12:40:06.241: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 13 12:40:06.241: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 13 12:40:06.241: INFO: update-demo-nautilus-vzjjc is verified up and running
STEP: scaling down the replication controller
Feb 13 12:40:06.243: INFO: scanned /root for discovery docs: 
Feb 13 12:40:06.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:07.434: INFO: stderr: ""
Feb 13 12:40:07.434: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 13 12:40:07.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:07.654: INFO: stderr: ""
Feb 13 12:40:07.654: INFO: stdout: "update-demo-nautilus-5x6xd update-demo-nautilus-vzjjc "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 13 12:40:12.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:12.850: INFO: stderr: ""
Feb 13 12:40:12.850: INFO: stdout: "update-demo-nautilus-5x6xd update-demo-nautilus-vzjjc "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 13 12:40:17.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:18.050: INFO: stderr: ""
Feb 13 12:40:18.050: INFO: stdout: "update-demo-nautilus-vzjjc "
Feb 13 12:40:18.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzjjc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:18.134: INFO: stderr: ""
Feb 13 12:40:18.134: INFO: stdout: "true"
Feb 13 12:40:18.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzjjc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:18.224: INFO: stderr: ""
Feb 13 12:40:18.224: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 13 12:40:18.224: INFO: validating pod update-demo-nautilus-vzjjc
Feb 13 12:40:18.234: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 13 12:40:18.234: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 13 12:40:18.234: INFO: update-demo-nautilus-vzjjc is verified up and running
STEP: scaling up the replication controller
Feb 13 12:40:18.236: INFO: scanned /root for discovery docs: 
Feb 13 12:40:18.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:19.443: INFO: stderr: ""
Feb 13 12:40:19.443: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 13 12:40:19.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:19.596: INFO: stderr: ""
Feb 13 12:40:19.596: INFO: stdout: "update-demo-nautilus-vzjjc update-demo-nautilus-w5ww9 "
Feb 13 12:40:19.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzjjc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:19.768: INFO: stderr: ""
Feb 13 12:40:19.768: INFO: stdout: "true"
Feb 13 12:40:19.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzjjc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:20.004: INFO: stderr: ""
Feb 13 12:40:20.004: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 13 12:40:20.004: INFO: validating pod update-demo-nautilus-vzjjc
Feb 13 12:40:20.013: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 13 12:40:20.014: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 13 12:40:20.014: INFO: update-demo-nautilus-vzjjc is verified up and running
Feb 13 12:40:20.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w5ww9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:20.171: INFO: stderr: ""
Feb 13 12:40:20.171: INFO: stdout: ""
Feb 13 12:40:20.171: INFO: update-demo-nautilus-w5ww9 is created but not running
Feb 13 12:40:25.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:25.457: INFO: stderr: ""
Feb 13 12:40:25.457: INFO: stdout: "update-demo-nautilus-vzjjc update-demo-nautilus-w5ww9 "
Feb 13 12:40:25.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzjjc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:25.582: INFO: stderr: ""
Feb 13 12:40:25.582: INFO: stdout: "true"
Feb 13 12:40:25.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzjjc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:25.701: INFO: stderr: ""
Feb 13 12:40:25.701: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 13 12:40:25.701: INFO: validating pod update-demo-nautilus-vzjjc
Feb 13 12:40:25.713: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 13 12:40:25.713: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 13 12:40:25.713: INFO: update-demo-nautilus-vzjjc is verified up and running
Feb 13 12:40:25.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w5ww9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:25.841: INFO: stderr: ""
Feb 13 12:40:25.841: INFO: stdout: ""
Feb 13 12:40:25.841: INFO: update-demo-nautilus-w5ww9 is created but not running
Feb 13 12:40:30.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:31.072: INFO: stderr: ""
Feb 13 12:40:31.072: INFO: stdout: "update-demo-nautilus-vzjjc update-demo-nautilus-w5ww9 "
Feb 13 12:40:31.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzjjc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:31.257: INFO: stderr: ""
Feb 13 12:40:31.257: INFO: stdout: "true"
Feb 13 12:40:31.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vzjjc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:31.384: INFO: stderr: ""
Feb 13 12:40:31.384: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 13 12:40:31.385: INFO: validating pod update-demo-nautilus-vzjjc
Feb 13 12:40:31.412: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 13 12:40:31.412: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 13 12:40:31.412: INFO: update-demo-nautilus-vzjjc is verified up and running
Feb 13 12:40:31.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w5ww9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:31.520: INFO: stderr: ""
Feb 13 12:40:31.520: INFO: stdout: "true"
Feb 13 12:40:31.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w5ww9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:31.603: INFO: stderr: ""
Feb 13 12:40:31.604: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 13 12:40:31.604: INFO: validating pod update-demo-nautilus-w5ww9
Feb 13 12:40:31.611: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 13 12:40:31.612: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 13 12:40:31.612: INFO: update-demo-nautilus-w5ww9 is verified up and running
STEP: using delete to clean up resources
Feb 13 12:40:31.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:31.694: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 13 12:40:31.694: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 13 12:40:31.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-ggmlj'
Feb 13 12:40:31.843: INFO: stderr: "No resources found.\n"
Feb 13 12:40:31.843: INFO: stdout: ""
Feb 13 12:40:31.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-ggmlj -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 13 12:40:32.011: INFO: stderr: ""
Feb 13 12:40:32.011: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:40:32.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ggmlj" for this suite.
Feb 13 12:40:56.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:40:56.146: INFO: namespace: e2e-tests-kubectl-ggmlj, resource: bindings, ignored listing per whitelist
Feb 13 12:40:56.263: INFO: namespace e2e-tests-kubectl-ggmlj deletion completed in 24.229079338s

• [SLOW TEST:67.270 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:40:56.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-147d705e-4e5e-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 13 12:40:56.748: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1484e558-4e5e-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-4dtcq" to be "success or failure"
Feb 13 12:40:56.784: INFO: Pod "pod-projected-configmaps-1484e558-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 36.032596ms
Feb 13 12:40:58.811: INFO: Pod "pod-projected-configmaps-1484e558-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062824884s
Feb 13 12:41:00.841: INFO: Pod "pod-projected-configmaps-1484e558-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092852977s
Feb 13 12:41:03.626: INFO: Pod "pod-projected-configmaps-1484e558-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.877367576s
Feb 13 12:41:05.634: INFO: Pod "pod-projected-configmaps-1484e558-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.886102169s
Feb 13 12:41:07.654: INFO: Pod "pod-projected-configmaps-1484e558-4e5e-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.905663046s
STEP: Saw pod success
Feb 13 12:41:07.654: INFO: Pod "pod-projected-configmaps-1484e558-4e5e-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:41:07.660: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-1484e558-4e5e-11ea-aba9-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 13 12:41:07.872: INFO: Waiting for pod pod-projected-configmaps-1484e558-4e5e-11ea-aba9-0242ac110007 to disappear
Feb 13 12:41:07.894: INFO: Pod pod-projected-configmaps-1484e558-4e5e-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:41:07.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4dtcq" for this suite.
Feb 13 12:41:15.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:41:16.031: INFO: namespace: e2e-tests-projected-4dtcq, resource: bindings, ignored listing per whitelist
Feb 13 12:41:16.139: INFO: namespace e2e-tests-projected-4dtcq deletion completed in 8.233217755s

• [SLOW TEST:19.875 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:41:16.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 13 12:41:16.278: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 13 12:41:16.391: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 13 12:41:21.406: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 13 12:41:27.427: INFO: Creating deployment "test-rolling-update-deployment"
Feb 13 12:41:27.443: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 13 12:41:27.471: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 13 12:41:29.708: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb 13 12:41:29.973: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194487, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194487, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194487, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194487, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 12:41:32.007: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194487, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194487, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194487, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194487, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 12:41:34.209: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194487, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194487, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194487, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194487, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 12:41:36.006: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194487, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194487, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194487, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194487, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 12:41:37.997: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194487, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194487, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194497, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717194487, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 12:41:39.990: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 13 12:41:40.009: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-bmhsb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bmhsb/deployments/test-rolling-update-deployment,UID:26dfd086-4e5e-11ea-a994-fa163e34d433,ResourceVersion:21535176,Generation:1,CreationTimestamp:2020-02-13 12:41:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-13 12:41:27 +0000 UTC 2020-02-13 12:41:27 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-13 12:41:38 +0000 UTC 2020-02-13 12:41:27 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 13 12:41:40.016: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-bmhsb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bmhsb/replicasets/test-rolling-update-deployment-75db98fb4c,UID:26eb77e7-4e5e-11ea-a994-fa163e34d433,ResourceVersion:21535166,Generation:1,CreationTimestamp:2020-02-13 12:41:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 26dfd086-4e5e-11ea-a994-fa163e34d433 0xc001f15337 0xc001f15338}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 13 12:41:40.016: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 13 12:41:40.017: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-bmhsb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bmhsb/replicasets/test-rolling-update-controller,UID:203a9695-4e5e-11ea-a994-fa163e34d433,ResourceVersion:21535175,Generation:2,CreationTimestamp:2020-02-13 12:41:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 26dfd086-4e5e-11ea-a994-fa163e34d433 0xc001f151cf 0xc001f15240}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 13 12:41:40.026: INFO: Pod "test-rolling-update-deployment-75db98fb4c-rlgt5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-rlgt5,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-bmhsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bmhsb/pods/test-rolling-update-deployment-75db98fb4c-rlgt5,UID:26f5e652-4e5e-11ea-a994-fa163e34d433,ResourceVersion:21535165,Generation:0,CreationTimestamp:2020-02-13 12:41:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 26eb77e7-4e5e-11ea-a994-fa163e34d433 0xc0014e7d17 0xc0014e7d18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hcw9q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hcw9q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-hcw9q true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0014e7d80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0014e7da0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:41:27 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:41:37 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:41:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 12:41:27 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-13 12:41:27 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-13 12:41:36 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://952c930ce59382328a1eb1f3fc3b659f1a1b44aeecd803fb409ee17e210e4edb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:41:40.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-bmhsb" for this suite.
Feb 13 12:41:48.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:41:48.547: INFO: namespace: e2e-tests-deployment-bmhsb, resource: bindings, ignored listing per whitelist
Feb 13 12:41:48.662: INFO: namespace e2e-tests-deployment-bmhsb deletion completed in 8.611907788s

• [SLOW TEST:32.523 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:41:48.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 13 12:42:03.813: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:42:05.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-ccxvw" for this suite.
Feb 13 12:42:33.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:42:33.985: INFO: namespace: e2e-tests-replicaset-ccxvw, resource: bindings, ignored listing per whitelist
Feb 13 12:42:34.085: INFO: namespace e2e-tests-replicaset-ccxvw deletion completed in 28.453956288s

• [SLOW TEST:45.423 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:42:34.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-4eb91abf-4e5e-11ea-aba9-0242ac110007
STEP: Creating secret with name secret-projected-all-test-volume-4eb91a98-4e5e-11ea-aba9-0242ac110007
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 13 12:42:34.323: INFO: Waiting up to 5m0s for pod "projected-volume-4eb91a28-4e5e-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-4lz8n" to be "success or failure"
Feb 13 12:42:34.541: INFO: Pod "projected-volume-4eb91a28-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 217.899464ms
Feb 13 12:42:36.558: INFO: Pod "projected-volume-4eb91a28-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235306943s
Feb 13 12:42:38.604: INFO: Pod "projected-volume-4eb91a28-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280905291s
Feb 13 12:42:40.737: INFO: Pod "projected-volume-4eb91a28-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413861172s
Feb 13 12:42:42.756: INFO: Pod "projected-volume-4eb91a28-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.432882375s
Feb 13 12:42:46.263: INFO: Pod "projected-volume-4eb91a28-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.940131128s
Feb 13 12:42:48.293: INFO: Pod "projected-volume-4eb91a28-4e5e-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.970171721s
STEP: Saw pod success
Feb 13 12:42:48.293: INFO: Pod "projected-volume-4eb91a28-4e5e-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:42:48.298: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-4eb91a28-4e5e-11ea-aba9-0242ac110007 container projected-all-volume-test: 
STEP: delete the pod
Feb 13 12:42:48.821: INFO: Waiting for pod projected-volume-4eb91a28-4e5e-11ea-aba9-0242ac110007 to disappear
Feb 13 12:42:48.828: INFO: Pod projected-volume-4eb91a28-4e5e-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:42:48.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4lz8n" for this suite.
Feb 13 12:42:54.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:42:54.921: INFO: namespace: e2e-tests-projected-4lz8n, resource: bindings, ignored listing per whitelist
Feb 13 12:42:55.118: INFO: namespace e2e-tests-projected-4lz8n deletion completed in 6.283463842s

• [SLOW TEST:21.033 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:42:55.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb 13 12:42:55.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6p7md'
Feb 13 12:42:57.740: INFO: stderr: ""
Feb 13 12:42:57.741: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 13 12:42:59.585: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:42:59.585: INFO: Found 0 / 1
Feb 13 12:42:59.906: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:42:59.906: INFO: Found 0 / 1
Feb 13 12:43:00.756: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:43:00.756: INFO: Found 0 / 1
Feb 13 12:43:01.765: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:43:01.765: INFO: Found 0 / 1
Feb 13 12:43:03.142: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:43:03.143: INFO: Found 0 / 1
Feb 13 12:43:03.923: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:43:03.924: INFO: Found 0 / 1
Feb 13 12:43:05.042: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:43:05.042: INFO: Found 0 / 1
Feb 13 12:43:05.765: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:43:05.765: INFO: Found 0 / 1
Feb 13 12:43:06.758: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:43:06.758: INFO: Found 0 / 1
Feb 13 12:43:07.755: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:43:07.755: INFO: Found 1 / 1
Feb 13 12:43:07.755: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 13 12:43:07.761: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:43:07.761: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 13 12:43:07.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-fql5j --namespace=e2e-tests-kubectl-6p7md -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 13 12:43:07.992: INFO: stderr: ""
Feb 13 12:43:07.993: INFO: stdout: "pod/redis-master-fql5j patched\n"
STEP: checking annotations
Feb 13 12:43:08.007: INFO: Selector matched 1 pods for map[app:redis]
Feb 13 12:43:08.007: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:43:08.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6p7md" for this suite.
Feb 13 12:43:32.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:43:32.135: INFO: namespace: e2e-tests-kubectl-6p7md, resource: bindings, ignored listing per whitelist
Feb 13 12:43:32.160: INFO: namespace e2e-tests-kubectl-6p7md deletion completed in 24.147608572s

• [SLOW TEST:37.042 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:43:32.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-7150fb70-4e5e-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 13 12:43:32.354: INFO: Waiting up to 5m0s for pod "pod-secrets-7153619b-4e5e-11ea-aba9-0242ac110007" in namespace "e2e-tests-secrets-nqk8z" to be "success or failure"
Feb 13 12:43:32.378: INFO: Pod "pod-secrets-7153619b-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 23.395622ms
Feb 13 12:43:34.409: INFO: Pod "pod-secrets-7153619b-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055217035s
Feb 13 12:43:36.425: INFO: Pod "pod-secrets-7153619b-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071229397s
Feb 13 12:43:39.342: INFO: Pod "pod-secrets-7153619b-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.987849627s
Feb 13 12:43:41.364: INFO: Pod "pod-secrets-7153619b-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.009693384s
Feb 13 12:43:43.379: INFO: Pod "pod-secrets-7153619b-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.024713647s
Feb 13 12:43:45.479: INFO: Pod "pod-secrets-7153619b-4e5e-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.124463953s
STEP: Saw pod success
Feb 13 12:43:45.479: INFO: Pod "pod-secrets-7153619b-4e5e-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:43:45.486: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-7153619b-4e5e-11ea-aba9-0242ac110007 container secret-env-test: 
STEP: delete the pod
Feb 13 12:43:45.752: INFO: Waiting for pod pod-secrets-7153619b-4e5e-11ea-aba9-0242ac110007 to disappear
Feb 13 12:43:45.765: INFO: Pod pod-secrets-7153619b-4e5e-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:43:45.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nqk8z" for this suite.
Feb 13 12:43:51.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:43:51.972: INFO: namespace: e2e-tests-secrets-nqk8z, resource: bindings, ignored listing per whitelist
Feb 13 12:43:52.217: INFO: namespace e2e-tests-secrets-nqk8z deletion completed in 6.401540535s

• [SLOW TEST:20.056 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:43:52.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Feb 13 12:43:52.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 13 12:43:52.689: INFO: stderr: ""
Feb 13 12:43:52.689: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:43:52.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-86xzf" for this suite.
Feb 13 12:43:58.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:43:58.871: INFO: namespace: e2e-tests-kubectl-86xzf, resource: bindings, ignored listing per whitelist
Feb 13 12:43:58.945: INFO: namespace e2e-tests-kubectl-86xzf deletion completed in 6.247350919s

• [SLOW TEST:6.728 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:43:58.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-rsrnf
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 13 12:44:00.649: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 13 12:44:36.845: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-rsrnf PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 13 12:44:36.845: INFO: >>> kubeConfig: /root/.kube/config
I0213 12:44:36.933971       8 log.go:172] (0xc0007ea9a0) (0xc0019e7ae0) Create stream
I0213 12:44:36.934046       8 log.go:172] (0xc0007ea9a0) (0xc0019e7ae0) Stream added, broadcasting: 1
I0213 12:44:36.940644       8 log.go:172] (0xc0007ea9a0) Reply frame received for 1
I0213 12:44:36.940684       8 log.go:172] (0xc0007ea9a0) (0xc0018090e0) Create stream
I0213 12:44:36.940699       8 log.go:172] (0xc0007ea9a0) (0xc0018090e0) Stream added, broadcasting: 3
I0213 12:44:36.942356       8 log.go:172] (0xc0007ea9a0) Reply frame received for 3
I0213 12:44:36.942476       8 log.go:172] (0xc0007ea9a0) (0xc0029101e0) Create stream
I0213 12:44:36.942499       8 log.go:172] (0xc0007ea9a0) (0xc0029101e0) Stream added, broadcasting: 5
I0213 12:44:36.952384       8 log.go:172] (0xc0007ea9a0) Reply frame received for 5
I0213 12:44:37.200534       8 log.go:172] (0xc0007ea9a0) Data frame received for 3
I0213 12:44:37.200598       8 log.go:172] (0xc0018090e0) (3) Data frame handling
I0213 12:44:37.200620       8 log.go:172] (0xc0018090e0) (3) Data frame sent
I0213 12:44:37.332992       8 log.go:172] (0xc0007ea9a0) Data frame received for 1
I0213 12:44:37.333385       8 log.go:172] (0xc0019e7ae0) (1) Data frame handling
I0213 12:44:37.333502       8 log.go:172] (0xc0019e7ae0) (1) Data frame sent
I0213 12:44:37.334533       8 log.go:172] (0xc0007ea9a0) (0xc0019e7ae0) Stream removed, broadcasting: 1
I0213 12:44:37.335428       8 log.go:172] (0xc0007ea9a0) (0xc0018090e0) Stream removed, broadcasting: 3
I0213 12:44:37.335571       8 log.go:172] (0xc0007ea9a0) (0xc0029101e0) Stream removed, broadcasting: 5
I0213 12:44:37.335663       8 log.go:172] (0xc0007ea9a0) Go away received
I0213 12:44:37.335914       8 log.go:172] (0xc0007ea9a0) (0xc0019e7ae0) Stream removed, broadcasting: 1
I0213 12:44:37.335952       8 log.go:172] (0xc0007ea9a0) (0xc0018090e0) Stream removed, broadcasting: 3
I0213 12:44:37.336008       8 log.go:172] (0xc0007ea9a0) (0xc0029101e0) Stream removed, broadcasting: 5
Feb 13 12:44:37.336: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:44:37.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-rsrnf" for this suite.
Feb 13 12:45:05.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:45:05.551: INFO: namespace: e2e-tests-pod-network-test-rsrnf, resource: bindings, ignored listing per whitelist
Feb 13 12:45:05.598: INFO: namespace e2e-tests-pod-network-test-rsrnf deletion completed in 28.244142918s

• [SLOW TEST:66.653 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:45:05.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 13 12:45:05.859: INFO: Waiting up to 5m0s for pod "downward-api-a90eac78-4e5e-11ea-aba9-0242ac110007" in namespace "e2e-tests-downward-api-r46k5" to be "success or failure"
Feb 13 12:45:05.938: INFO: Pod "downward-api-a90eac78-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 79.668574ms
Feb 13 12:45:08.003: INFO: Pod "downward-api-a90eac78-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144133071s
Feb 13 12:45:10.023: INFO: Pod "downward-api-a90eac78-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164442374s
Feb 13 12:45:12.041: INFO: Pod "downward-api-a90eac78-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.181828549s
Feb 13 12:45:14.056: INFO: Pod "downward-api-a90eac78-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.197654768s
Feb 13 12:45:16.098: INFO: Pod "downward-api-a90eac78-4e5e-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.239355517s
STEP: Saw pod success
Feb 13 12:45:16.098: INFO: Pod "downward-api-a90eac78-4e5e-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:45:16.105: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-a90eac78-4e5e-11ea-aba9-0242ac110007 container dapi-container: 
STEP: delete the pod
Feb 13 12:45:16.294: INFO: Waiting for pod downward-api-a90eac78-4e5e-11ea-aba9-0242ac110007 to disappear
Feb 13 12:45:16.507: INFO: Pod downward-api-a90eac78-4e5e-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:45:16.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-r46k5" for this suite.
Feb 13 12:45:22.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:45:22.685: INFO: namespace: e2e-tests-downward-api-r46k5, resource: bindings, ignored listing per whitelist
Feb 13 12:45:22.752: INFO: namespace e2e-tests-downward-api-r46k5 deletion completed in 6.22645995s

• [SLOW TEST:17.154 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:45:22.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-b33da728-4e5e-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 13 12:45:22.945: INFO: Waiting up to 5m0s for pod "pod-secrets-b33e917c-4e5e-11ea-aba9-0242ac110007" in namespace "e2e-tests-secrets-2rlkr" to be "success or failure"
Feb 13 12:45:22.969: INFO: Pod "pod-secrets-b33e917c-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 23.36221ms
Feb 13 12:45:25.083: INFO: Pod "pod-secrets-b33e917c-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137996803s
Feb 13 12:45:27.099: INFO: Pod "pod-secrets-b33e917c-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154203135s
Feb 13 12:45:29.634: INFO: Pod "pod-secrets-b33e917c-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.68908891s
Feb 13 12:45:31.655: INFO: Pod "pod-secrets-b33e917c-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.709306283s
Feb 13 12:45:34.364: INFO: Pod "pod-secrets-b33e917c-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.418913543s
Feb 13 12:45:36.379: INFO: Pod "pod-secrets-b33e917c-4e5e-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.433692905s
STEP: Saw pod success
Feb 13 12:45:36.379: INFO: Pod "pod-secrets-b33e917c-4e5e-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:45:36.385: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b33e917c-4e5e-11ea-aba9-0242ac110007 container secret-volume-test: 
STEP: delete the pod
Feb 13 12:45:36.527: INFO: Waiting for pod pod-secrets-b33e917c-4e5e-11ea-aba9-0242ac110007 to disappear
Feb 13 12:45:36.543: INFO: Pod pod-secrets-b33e917c-4e5e-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:45:36.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2rlkr" for this suite.
Feb 13 12:45:44.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:45:44.835: INFO: namespace: e2e-tests-secrets-2rlkr, resource: bindings, ignored listing per whitelist
Feb 13 12:45:44.885: INFO: namespace e2e-tests-secrets-2rlkr deletion completed in 8.324067175s

• [SLOW TEST:22.133 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:45:44.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 13 12:45:45.110: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c0699fdd-4e5e-11ea-aba9-0242ac110007" in namespace "e2e-tests-downward-api-chzwz" to be "success or failure"
Feb 13 12:45:45.126: INFO: Pod "downwardapi-volume-c0699fdd-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 16.284704ms
Feb 13 12:45:47.427: INFO: Pod "downwardapi-volume-c0699fdd-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.3167205s
Feb 13 12:45:49.450: INFO: Pod "downwardapi-volume-c0699fdd-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.339737175s
Feb 13 12:45:51.742: INFO: Pod "downwardapi-volume-c0699fdd-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.63239231s
Feb 13 12:45:53.757: INFO: Pod "downwardapi-volume-c0699fdd-4e5e-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.647514868s
Feb 13 12:45:55.778: INFO: Pod "downwardapi-volume-c0699fdd-4e5e-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.668083175s
STEP: Saw pod success
Feb 13 12:45:55.778: INFO: Pod "downwardapi-volume-c0699fdd-4e5e-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:45:55.789: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c0699fdd-4e5e-11ea-aba9-0242ac110007 container client-container: 
STEP: delete the pod
Feb 13 12:45:56.698: INFO: Waiting for pod downwardapi-volume-c0699fdd-4e5e-11ea-aba9-0242ac110007 to disappear
Feb 13 12:45:56.715: INFO: Pod downwardapi-volume-c0699fdd-4e5e-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:45:56.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-chzwz" for this suite.
Feb 13 12:46:02.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:46:02.878: INFO: namespace: e2e-tests-downward-api-chzwz, resource: bindings, ignored listing per whitelist
Feb 13 12:46:02.920: INFO: namespace e2e-tests-downward-api-chzwz deletion completed in 6.196164684s

• [SLOW TEST:18.034 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
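The repeated `Phase="Pending" … Elapsed: …` lines in the test above come from the framework's wait loop: it polls the pod's phase until it reaches a terminal state (Succeeded or Failed) or the 5m0s timeout expires, logging phase and elapsed time on each iteration. A minimal sketch of that poll-with-timeout pattern (function and parameter names are illustrative, not the framework's actual API):

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until the pod reports a terminal phase.

    Mirrors the 'Waiting up to 5m0s for pod ... to be "success or
    failure"' loop seen in the log: each iteration logs the current
    phase and elapsed time, then sleeps for the poll interval.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Phase="{phase}", Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)

# Simulated phase source: Pending twice, then Succeeded,
# matching the progression logged above.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases), interval=0.01)
print(result)  # Succeeded
```

The real framework polls the API server for the Pod object each iteration; the lambda here just stands in for that call.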
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:46:02.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Feb 13 12:46:03.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wffqh'
Feb 13 12:46:03.551: INFO: stderr: ""
Feb 13 12:46:03.551: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 13 12:46:03.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wffqh'
Feb 13 12:46:03.727: INFO: stderr: ""
Feb 13 12:46:03.727: INFO: stdout: "update-demo-nautilus-kdlst update-demo-nautilus-xnsjn "
Feb 13 12:46:03.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kdlst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wffqh'
Feb 13 12:46:04.021: INFO: stderr: ""
Feb 13 12:46:04.021: INFO: stdout: ""
Feb 13 12:46:04.021: INFO: update-demo-nautilus-kdlst is created but not running
Feb 13 12:46:09.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wffqh'
Feb 13 12:46:09.228: INFO: stderr: ""
Feb 13 12:46:09.228: INFO: stdout: "update-demo-nautilus-kdlst update-demo-nautilus-xnsjn "
Feb 13 12:46:09.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kdlst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wffqh'
Feb 13 12:46:10.386: INFO: stderr: ""
Feb 13 12:46:10.386: INFO: stdout: ""
Feb 13 12:46:10.386: INFO: update-demo-nautilus-kdlst is created but not running
Feb 13 12:46:15.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wffqh'
Feb 13 12:46:15.622: INFO: stderr: ""
Feb 13 12:46:15.622: INFO: stdout: "update-demo-nautilus-kdlst update-demo-nautilus-xnsjn "
Feb 13 12:46:15.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kdlst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wffqh'
Feb 13 12:46:15.714: INFO: stderr: ""
Feb 13 12:46:15.714: INFO: stdout: "true"
Feb 13 12:46:15.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kdlst -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wffqh'
Feb 13 12:46:15.816: INFO: stderr: ""
Feb 13 12:46:15.816: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 13 12:46:15.816: INFO: validating pod update-demo-nautilus-kdlst
Feb 13 12:46:15.836: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 13 12:46:15.836: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 13 12:46:15.836: INFO: update-demo-nautilus-kdlst is verified up and running
Feb 13 12:46:15.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnsjn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wffqh'
Feb 13 12:46:15.942: INFO: stderr: ""
Feb 13 12:46:15.942: INFO: stdout: "true"
Feb 13 12:46:15.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnsjn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wffqh'
Feb 13 12:46:16.067: INFO: stderr: ""
Feb 13 12:46:16.067: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 13 12:46:16.067: INFO: validating pod update-demo-nautilus-xnsjn
Feb 13 12:46:16.076: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 13 12:46:16.076: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 13 12:46:16.077: INFO: update-demo-nautilus-xnsjn is verified up and running
STEP: rolling-update to new replication controller
Feb 13 12:46:16.079: INFO: scanned /root for discovery docs: 
Feb 13 12:46:16.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-wffqh'
Feb 13 12:46:50.700: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 13 12:46:50.700: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 13 12:46:50.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wffqh'
Feb 13 12:46:50.936: INFO: stderr: ""
Feb 13 12:46:50.936: INFO: stdout: "update-demo-kitten-62z8x update-demo-kitten-wx9n5 "
Feb 13 12:46:50.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-62z8x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wffqh'
Feb 13 12:46:51.126: INFO: stderr: ""
Feb 13 12:46:51.126: INFO: stdout: "true"
Feb 13 12:46:51.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-62z8x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wffqh'
Feb 13 12:46:51.239: INFO: stderr: ""
Feb 13 12:46:51.239: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 13 12:46:51.240: INFO: validating pod update-demo-kitten-62z8x
Feb 13 12:46:51.273: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 13 12:46:51.273: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 13 12:46:51.273: INFO: update-demo-kitten-62z8x is verified up and running
Feb 13 12:46:51.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wx9n5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wffqh'
Feb 13 12:46:51.384: INFO: stderr: ""
Feb 13 12:46:51.384: INFO: stdout: "true"
Feb 13 12:46:51.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wx9n5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wffqh'
Feb 13 12:46:51.509: INFO: stderr: ""
Feb 13 12:46:51.509: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 13 12:46:51.509: INFO: validating pod update-demo-kitten-wx9n5
Feb 13 12:46:51.517: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 13 12:46:51.517: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 13 12:46:51.517: INFO: update-demo-kitten-wx9n5 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:46:51.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wffqh" for this suite.
Feb 13 12:47:17.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:47:17.660: INFO: namespace: e2e-tests-kubectl-wffqh, resource: bindings, ignored listing per whitelist
Feb 13 12:47:17.734: INFO: namespace e2e-tests-kubectl-wffqh deletion completed in 26.212329933s

• [SLOW TEST:74.812 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
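Each readiness poll in the Update Demo test shells out to kubectl with a Go template that prints `true` only when the named container has a `state.running` entry; an empty stdout is what produces the "is created but not running" retries early in the log. The same predicate, sketched in Python over a pod object shaped like `kubectl get pod -o json` output (the pod dicts below are hand-built examples, not captured cluster data):

```python
def container_running(pod, name="update-demo"):
    """Return "true" iff container `name` reports state.running,
    matching the Go template's behaviour of emitting "true" or
    nothing at all."""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == name and "running" in cs.get("state", {}):
            return "true"
    return ""

starting = {"status": {}}  # no containerStatuses yet: pod still starting
running = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {}}}]}}

print(repr(container_running(starting)))  # '' -> retry after 5s
print(repr(container_running(running)))   # 'true' -> proceed to image check
```

The suite loops on this check with a 5-second sleep between attempts, which is why the same `kubectl get pods` command appears repeatedly with ~5s gaps in the timestamps.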
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:47:17.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 13 12:47:18.116: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:47:35.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-gh79t" for this suite.
Feb 13 12:47:44.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:47:44.252: INFO: namespace: e2e-tests-init-container-gh79t, resource: bindings, ignored listing per whitelist
Feb 13 12:47:44.280: INFO: namespace e2e-tests-init-container-gh79t deletion completed in 8.335963758s

• [SLOW TEST:26.546 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
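The InitContainer test above creates a pod whose `spec.initContainers` must all run to completion, one at a time and in order, before any app container starts; with `restartPolicy: Never`, a failing init container fails the whole pod. A toy simulation of that ordering (this is an illustration of the semantics, not the kubelet's actual code):

```python
def run_pod(init_containers, app_containers):
    """Simulate init-container semantics under restartPolicy=Never:
    run each (name, exit_code) init container in order; any non-zero
    exit fails the pod and the app containers never start."""
    for name, exit_code in init_containers:
        if exit_code != 0:
            return f"Failed (init container {name} exited {exit_code})"
    # All init containers succeeded; app containers may now start.
    return "Running: " + ", ".join(name for name, _ in app_containers)

print(run_pod([("init-1", 0), ("init-2", 0)], [("app", 0)]))  # Running: app
print(run_pod([("init-1", 1)], [("app", 0)]))  # pod fails, app never runs
```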
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:47:44.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 13 12:47:44.725: INFO: Waiting up to 5m0s for pod "downwardapi-volume-07b7d9f4-4e5f-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-dbfhf" to be "success or failure"
Feb 13 12:47:44.739: INFO: Pod "downwardapi-volume-07b7d9f4-4e5f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 14.394295ms
Feb 13 12:47:47.109: INFO: Pod "downwardapi-volume-07b7d9f4-4e5f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.3838365s
Feb 13 12:47:49.126: INFO: Pod "downwardapi-volume-07b7d9f4-4e5f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.401027522s
Feb 13 12:47:51.165: INFO: Pod "downwardapi-volume-07b7d9f4-4e5f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440261549s
Feb 13 12:47:53.579: INFO: Pod "downwardapi-volume-07b7d9f4-4e5f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.854472824s
Feb 13 12:47:55.588: INFO: Pod "downwardapi-volume-07b7d9f4-4e5f-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.863193272s
STEP: Saw pod success
Feb 13 12:47:55.588: INFO: Pod "downwardapi-volume-07b7d9f4-4e5f-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:47:55.593: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-07b7d9f4-4e5f-11ea-aba9-0242ac110007 container client-container: 
STEP: delete the pod
Feb 13 12:47:56.263: INFO: Waiting for pod downwardapi-volume-07b7d9f4-4e5f-11ea-aba9-0242ac110007 to disappear
Feb 13 12:47:56.290: INFO: Pod downwardapi-volume-07b7d9f4-4e5f-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:47:56.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dbfhf" for this suite.
Feb 13 12:48:02.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:48:02.387: INFO: namespace: e2e-tests-projected-dbfhf, resource: bindings, ignored listing per whitelist
Feb 13 12:48:02.632: INFO: namespace e2e-tests-projected-dbfhf deletion completed in 6.319270989s

• [SLOW TEST:18.352 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:48:02.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 13 12:48:02.830: INFO: Waiting up to 5m0s for pod "downwardapi-volume-128300c3-4e5f-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-p6kw5" to be "success or failure"
Feb 13 12:48:02.843: INFO: Pod "downwardapi-volume-128300c3-4e5f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 12.952899ms
Feb 13 12:48:04.858: INFO: Pod "downwardapi-volume-128300c3-4e5f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028656404s
Feb 13 12:48:06.878: INFO: Pod "downwardapi-volume-128300c3-4e5f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047973874s
Feb 13 12:48:08.931: INFO: Pod "downwardapi-volume-128300c3-4e5f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100949214s
Feb 13 12:48:11.547: INFO: Pod "downwardapi-volume-128300c3-4e5f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.717208215s
Feb 13 12:48:13.560: INFO: Pod "downwardapi-volume-128300c3-4e5f-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.729971546s
STEP: Saw pod success
Feb 13 12:48:13.560: INFO: Pod "downwardapi-volume-128300c3-4e5f-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:48:13.563: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-128300c3-4e5f-11ea-aba9-0242ac110007 container client-container: 
STEP: delete the pod
Feb 13 12:48:14.196: INFO: Waiting for pod downwardapi-volume-128300c3-4e5f-11ea-aba9-0242ac110007 to disappear
Feb 13 12:48:14.209: INFO: Pod downwardapi-volume-128300c3-4e5f-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:48:14.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p6kw5" for this suite.
Feb 13 12:48:20.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:48:20.283: INFO: namespace: e2e-tests-projected-p6kw5, resource: bindings, ignored listing per whitelist
Feb 13 12:48:20.569: INFO: namespace e2e-tests-projected-p6kw5 deletion completed in 6.349543145s

• [SLOW TEST:17.938 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:48:20.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0213 12:48:31.255856       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 13 12:48:31.255: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:48:31.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-nlr66" for this suite.
Feb 13 12:48:37.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:48:37.428: INFO: namespace: e2e-tests-gc-nlr66, resource: bindings, ignored listing per whitelist
Feb 13 12:48:37.430: INFO: namespace e2e-tests-gc-nlr66 deletion completed in 6.166230988s

• [SLOW TEST:16.860 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
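The garbage collector test works because every pod an RC creates carries an `ownerReferences` entry pointing at the RC's UID; deleting the RC without orphaning lets the GC delete all dependents, which is the "wait for all pods to be garbage collected" step above. A toy model of the owner-reference scan (the UIDs and pod names below are made up for illustration):

```python
def collect_garbage(objects, deleted_owner_uid):
    """Drop every object that lists deleted_owner_uid among its
    ownerReferences, mimicking cascading deletion when an owner is
    deleted without the Orphan propagation policy."""
    return [
        obj for obj in objects
        if deleted_owner_uid not in
           {ref["uid"] for ref in obj.get("ownerReferences", [])}
    ]

pods = [
    {"name": "rc-pod-1", "ownerReferences": [{"uid": "rc-123"}]},
    {"name": "rc-pod-2", "ownerReferences": [{"uid": "rc-123"}]},
    {"name": "standalone"},  # no owner: survives the cascade
]
survivors = collect_garbage(pods, "rc-123")
print([p["name"] for p in survivors])  # ['standalone']
```

Had the test used the Orphan policy instead, the GC would strip the ownerReferences and leave the pods running.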
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:48:37.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 13 12:48:47.745: INFO: Waiting up to 5m0s for pod "client-envvars-2d479eea-4e5f-11ea-aba9-0242ac110007" in namespace "e2e-tests-pods-wkz2m" to be "success or failure"
Feb 13 12:48:47.774: INFO: Pod "client-envvars-2d479eea-4e5f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 29.067036ms
Feb 13 12:48:49.795: INFO: Pod "client-envvars-2d479eea-4e5f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050108028s
Feb 13 12:48:51.813: INFO: Pod "client-envvars-2d479eea-4e5f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067655776s
Feb 13 12:48:54.707: INFO: Pod "client-envvars-2d479eea-4e5f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.962171297s
Feb 13 12:48:56.719: INFO: Pod "client-envvars-2d479eea-4e5f-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.974099896s
Feb 13 12:48:58.736: INFO: Pod "client-envvars-2d479eea-4e5f-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.991303063s
STEP: Saw pod success
Feb 13 12:48:58.736: INFO: Pod "client-envvars-2d479eea-4e5f-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:48:58.747: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-2d479eea-4e5f-11ea-aba9-0242ac110007 container env3cont: 
STEP: delete the pod
Feb 13 12:48:58.851: INFO: Waiting for pod client-envvars-2d479eea-4e5f-11ea-aba9-0242ac110007 to disappear
Feb 13 12:48:58.869: INFO: Pod client-envvars-2d479eea-4e5f-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:48:58.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-wkz2m" for this suite.
Feb 13 12:49:44.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:49:45.026: INFO: namespace: e2e-tests-pods-wkz2m, resource: bindings, ignored listing per whitelist
Feb 13 12:49:45.072: INFO: namespace e2e-tests-pods-wkz2m deletion completed in 46.193352677s

• [SLOW TEST:67.641 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
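The Pods test above verifies the environment variables kubelet injects for services that already exist when a pod starts: for a service named, say, `fooservice`, containers see `FOOSERVICE_SERVICE_HOST`, `FOOSERVICE_SERVICE_PORT`, and related variables. A sketch of the naming rule for just the two `_SERVICE_` variables (the full injected set is larger, including per-port and docker-link-style variables):

```python
def service_env_vars(name, cluster_ip, port):
    """Derive the basic {NAME}_SERVICE_HOST/_PORT variables a pod
    receives for an existing service: the service name is upper-cased
    and dashes become underscores."""
    key = name.upper().replace("-", "_")
    return {
        f"{key}_SERVICE_HOST": cluster_ip,
        f"{key}_SERVICE_PORT": str(port),
    }

print(service_env_vars("fooservice-1", "10.96.0.10", 8765))
```

This is also why the test pod is only created after the service exists: the variables are resolved once at container start, not updated live.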
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:49:45.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-zhcwx
Feb 13 12:49:57.463: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-zhcwx
STEP: checking the pod's current state and verifying that restartCount is present
Feb 13 12:49:57.469: INFO: Initial restart count of pod liveness-exec is 0
Feb 13 12:50:54.171: INFO: Restart count of pod e2e-tests-container-probe-zhcwx/liveness-exec is now 1 (56.701880891s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:50:54.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-zhcwx" for this suite.
Feb 13 12:51:00.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:51:00.710: INFO: namespace: e2e-tests-container-probe-zhcwx, resource: bindings, ignored listing per whitelist
Feb 13 12:51:00.754: INFO: namespace e2e-tests-container-probe-zhcwx deletion completed in 6.507235182s

• [SLOW TEST:75.682 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
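In the liveness test, the container creates `/tmp/health`, sleeps, then removes it; once `cat /tmp/health` starts failing and the probe's failure threshold is crossed, kubelet kills and restarts the container — that is what bumps restartCount from 0 to 1 roughly a minute in. The threshold bookkeeping can be sketched as follows (the threshold value and probe sequence are illustrative, not taken from the test's actual spec):

```python
def probe_loop(results, failure_threshold=3):
    """Count consecutive exec-probe failures; record a restart (and
    reset the counter) each time failure_threshold consecutive
    failures accumulate, the way kubelet treats a liveness probe."""
    restarts = 0
    consecutive_failures = 0
    for ok in results:
        consecutive_failures = 0 if ok else consecutive_failures + 1
        if consecutive_failures >= failure_threshold:
            restarts += 1
            consecutive_failures = 0  # container restarted fresh
    return restarts

# Healthy while /tmp/health exists, failing after it is removed:
print(probe_loop([True] * 5 + [False] * 4))  # 1
```

A single transient failure never triggers a restart here; only a run of consecutive failures does, which keeps flaky probes from restart-looping a healthy container.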
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:51:00.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-mp5gb
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-mp5gb
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-mp5gb
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-mp5gb
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-mp5gb
Feb 13 12:51:15.155: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-mp5gb, name: ss-0, uid: 842da1bb-4e5f-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Feb 13 12:51:22.511: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-mp5gb, name: ss-0, uid: 842da1bb-4e5f-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb 13 12:51:22.638: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-mp5gb, name: ss-0, uid: 842da1bb-4e5f-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb 13 12:51:22.667: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-mp5gb
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-mp5gb
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-mp5gb and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 13 12:51:35.801: INFO: Deleting all statefulset in ns e2e-tests-statefulset-mp5gb
Feb 13 12:51:35.809: INFO: Scaling statefulset ss to 0
Feb 13 12:51:55.875: INFO: Waiting for statefulset status.replicas updated to 0
Feb 13 12:51:55.883: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:51:55.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-mp5gb" for this suite.
Feb 13 12:52:04.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:52:04.083: INFO: namespace: e2e-tests-statefulset-mp5gb, resource: bindings, ignored listing per whitelist
Feb 13 12:52:04.176: INFO: namespace e2e-tests-statefulset-mp5gb deletion completed in 8.24345465s

• [SLOW TEST:63.422 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:52:04.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0213 12:52:35.194704       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 13 12:52:35.194: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:52:35.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-kdmcb" for this suite.
Feb 13 12:52:45.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:52:45.467: INFO: namespace: e2e-tests-gc-kdmcb, resource: bindings, ignored listing per whitelist
Feb 13 12:52:45.933: INFO: namespace e2e-tests-gc-kdmcb deletion completed in 10.73279541s

• [SLOW TEST:41.757 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:52:45.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Feb 13 12:52:47.054: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 13 12:52:47.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4mc4v'
Feb 13 12:52:47.577: INFO: stderr: ""
Feb 13 12:52:47.577: INFO: stdout: "service/redis-slave created\n"
Feb 13 12:52:47.578: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 13 12:52:47.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4mc4v'
Feb 13 12:52:48.042: INFO: stderr: ""
Feb 13 12:52:48.043: INFO: stdout: "service/redis-master created\n"
Feb 13 12:52:48.043: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 13 12:52:48.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4mc4v'
Feb 13 12:52:48.597: INFO: stderr: ""
Feb 13 12:52:48.597: INFO: stdout: "service/frontend created\n"
Feb 13 12:52:48.598: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 13 12:52:48.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4mc4v'
Feb 13 12:52:49.088: INFO: stderr: ""
Feb 13 12:52:49.088: INFO: stdout: "deployment.extensions/frontend created\n"
Feb 13 12:52:49.089: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 13 12:52:49.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4mc4v'
Feb 13 12:52:49.464: INFO: stderr: ""
Feb 13 12:52:49.464: INFO: stdout: "deployment.extensions/redis-master created\n"
Feb 13 12:52:49.465: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 13 12:52:49.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4mc4v'
Feb 13 12:52:50.161: INFO: stderr: ""
Feb 13 12:52:50.161: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Feb 13 12:52:50.161: INFO: Waiting for all frontend pods to be Running.
Feb 13 12:53:20.212: INFO: Waiting for frontend to serve content.
Feb 13 12:53:23.368: INFO: Trying to add a new entry to the guestbook.
Feb 13 12:53:23.524: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb 13 12:53:23.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4mc4v'
Feb 13 12:53:25.747: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 13 12:53:25.748: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 13 12:53:25.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4mc4v'
Feb 13 12:53:26.037: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 13 12:53:26.037: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 13 12:53:26.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4mc4v'
Feb 13 12:53:26.295: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 13 12:53:26.295: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 13 12:53:26.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4mc4v'
Feb 13 12:53:26.507: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 13 12:53:26.507: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 13 12:53:26.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4mc4v'
Feb 13 12:53:26.672: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 13 12:53:26.672: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 13 12:53:26.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4mc4v'
Feb 13 12:53:26.993: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 13 12:53:26.993: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:53:26.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4mc4v" for this suite.
Feb 13 12:54:15.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:54:15.278: INFO: namespace: e2e-tests-kubectl-4mc4v, resource: bindings, ignored listing per whitelist
Feb 13 12:54:15.322: INFO: namespace e2e-tests-kubectl-4mc4v deletion completed in 48.319668172s

• [SLOW TEST:89.388 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:54:15.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wcv7p A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-wcv7p;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wcv7p A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-wcv7p;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wcv7p.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-wcv7p.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wcv7p.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-wcv7p.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wcv7p.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wcv7p.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wcv7p.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wcv7p.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wcv7p.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-wcv7p.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wcv7p.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-wcv7p.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wcv7p.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 64.208.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.208.64_udp@PTR;check="$$(dig +tcp +noall +answer +search 64.208.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.208.64_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wcv7p A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-wcv7p;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wcv7p A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-wcv7p;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wcv7p.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-wcv7p.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wcv7p.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-wcv7p.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wcv7p.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wcv7p.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wcv7p.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wcv7p.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wcv7p.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-wcv7p.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wcv7p.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-wcv7p.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wcv7p.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 64.208.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.208.64_udp@PTR;check="$$(dig +tcp +noall +answer +search 64.208.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.208.64_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 13 12:54:34.123: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.142: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.162: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-wcv7p from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.191: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-wcv7p from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.203: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-wcv7p.svc from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.215: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-wcv7p.svc from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.227: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wcv7p.svc from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.247: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wcv7p.svc from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.255: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-wcv7p.svc from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.263: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-wcv7p.svc from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.276: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.286: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.292: INFO: Unable to read 10.99.208.64_udp@PTR from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.313: INFO: Unable to read 10.99.208.64_tcp@PTR from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.328: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.337: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.347: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wcv7p from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.353: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wcv7p from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.361: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wcv7p.svc from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.366: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wcv7p.svc from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.371: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wcv7p.svc from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.378: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wcv7p.svc from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.382: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-wcv7p.svc from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.453: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-wcv7p.svc from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.471: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.483: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.498: INFO: Unable to read 10.99.208.64_udp@PTR from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.507: INFO: Unable to read 10.99.208.64_tcp@PTR from pod e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007: the server could not find the requested resource (get pods dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007)
Feb 13 12:54:34.507: INFO: Lookups using e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-wcv7p wheezy_tcp@dns-test-service.e2e-tests-dns-wcv7p wheezy_udp@dns-test-service.e2e-tests-dns-wcv7p.svc wheezy_tcp@dns-test-service.e2e-tests-dns-wcv7p.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wcv7p.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wcv7p.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-wcv7p.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-wcv7p.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.99.208.64_udp@PTR 10.99.208.64_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wcv7p jessie_tcp@dns-test-service.e2e-tests-dns-wcv7p jessie_udp@dns-test-service.e2e-tests-dns-wcv7p.svc jessie_tcp@dns-test-service.e2e-tests-dns-wcv7p.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wcv7p.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wcv7p.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-wcv7p.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-wcv7p.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.99.208.64_udp@PTR 10.99.208.64_tcp@PTR]

Feb 13 12:54:40.830: INFO: DNS probes using e2e-tests-dns-wcv7p/dns-test-f0dd09ed-4e5f-11ea-aba9-0242ac110007 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:54:45.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-wcv7p" for this suite.
Feb 13 12:54:58.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:54:58.457: INFO: namespace: e2e-tests-dns-wcv7p, resource: bindings, ignored listing per whitelist
Feb 13 12:54:58.650: INFO: namespace e2e-tests-dns-wcv7p deletion completed in 12.771059033s

• [SLOW TEST:43.328 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:54:58.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 13 12:54:59.397: INFO: Waiting up to 5m0s for pod "downward-api-0aba68a6-4e60-11ea-aba9-0242ac110007" in namespace "e2e-tests-downward-api-zf6mg" to be "success or failure"
Feb 13 12:54:59.433: INFO: Pod "downward-api-0aba68a6-4e60-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 35.244279ms
Feb 13 12:55:03.108: INFO: Pod "downward-api-0aba68a6-4e60-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.710984734s
Feb 13 12:55:05.124: INFO: Pod "downward-api-0aba68a6-4e60-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 5.726806941s
Feb 13 12:55:07.175: INFO: Pod "downward-api-0aba68a6-4e60-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.777330481s
Feb 13 12:55:09.194: INFO: Pod "downward-api-0aba68a6-4e60-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.796847593s
Feb 13 12:55:11.963: INFO: Pod "downward-api-0aba68a6-4e60-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 12.565157192s
Feb 13 12:55:13.986: INFO: Pod "downward-api-0aba68a6-4e60-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 14.588562972s
Feb 13 12:55:16.065: INFO: Pod "downward-api-0aba68a6-4e60-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 16.667498245s
Feb 13 12:55:19.040: INFO: Pod "downward-api-0aba68a6-4e60-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.64220212s
STEP: Saw pod success
Feb 13 12:55:19.040: INFO: Pod "downward-api-0aba68a6-4e60-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:55:19.050: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-0aba68a6-4e60-11ea-aba9-0242ac110007 container dapi-container: 
STEP: delete the pod
Feb 13 12:55:19.512: INFO: Waiting for pod downward-api-0aba68a6-4e60-11ea-aba9-0242ac110007 to disappear
Feb 13 12:55:19.680: INFO: Pod downward-api-0aba68a6-4e60-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:55:19.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zf6mg" for this suite.
Feb 13 12:55:25.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:55:25.977: INFO: namespace: e2e-tests-downward-api-zf6mg, resource: bindings, ignored listing per whitelist
Feb 13 12:55:26.164: INFO: namespace e2e-tests-downward-api-zf6mg deletion completed in 6.468087524s

• [SLOW TEST:27.513 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
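(Editor's note, not part of the log.) The Downward API test above creates a pod whose environment variables are filled in from pod metadata via `fieldRef`. A rough, hypothetical sketch of such a manifest, built as a plain Python dict for illustration (the image and variable names are assumptions; only the container name `dapi-container` and the `fieldRef` paths come from the log and the documented downward API):

```python
def downward_api_pod(name="dapi-test-pod"):
    """Build a pod manifest whose env vars come from the downward API."""

    def field_env(var, path):
        # Expose a pod field (e.g. metadata.name) as an env var via fieldRef.
        return {"name": var, "valueFrom": {"fieldRef": {"fieldPath": path}}}

    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "dapi-container",   # container name seen in the log
                "image": "busybox",          # assumed image
                "command": ["sh", "-c", "env"],
                "env": [
                    field_env("POD_NAME", "metadata.name"),
                    field_env("POD_NAMESPACE", "metadata.namespace"),
                    field_env("POD_IP", "status.podIP"),
                ],
            }],
        },
    }
```

The test then waits for the pod to reach `Succeeded` (the "success or failure" condition in the log) and checks the container's output for the expected values.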
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:55:26.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 13 12:55:26.430: INFO: Waiting up to 5m0s for pod "downward-api-1adedc3c-4e60-11ea-aba9-0242ac110007" in namespace "e2e-tests-downward-api-pn2mp" to be "success or failure"
Feb 13 12:55:26.449: INFO: Pod "downward-api-1adedc3c-4e60-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 18.325094ms
Feb 13 12:55:28.514: INFO: Pod "downward-api-1adedc3c-4e60-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084004752s
Feb 13 12:55:30.569: INFO: Pod "downward-api-1adedc3c-4e60-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138151654s
Feb 13 12:55:33.543: INFO: Pod "downward-api-1adedc3c-4e60-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.112985776s
Feb 13 12:55:35.574: INFO: Pod "downward-api-1adedc3c-4e60-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.143654258s
Feb 13 12:55:37.598: INFO: Pod "downward-api-1adedc3c-4e60-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.167484327s
STEP: Saw pod success
Feb 13 12:55:37.598: INFO: Pod "downward-api-1adedc3c-4e60-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 12:55:37.608: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-1adedc3c-4e60-11ea-aba9-0242ac110007 container dapi-container: 
STEP: delete the pod
Feb 13 12:55:38.652: INFO: Waiting for pod downward-api-1adedc3c-4e60-11ea-aba9-0242ac110007 to disappear
Feb 13 12:55:38.669: INFO: Pod downward-api-1adedc3c-4e60-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 12:55:38.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pn2mp" for this suite.
Feb 13 12:55:46.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 12:55:46.901: INFO: namespace: e2e-tests-downward-api-pn2mp, resource: bindings, ignored listing per whitelist
Feb 13 12:55:46.966: INFO: namespace e2e-tests-downward-api-pn2mp deletion completed in 8.284293153s

• [SLOW TEST:20.802 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 12:55:46.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 13 12:55:50.061: INFO: Pod name wrapped-volume-race-28e3d7e5-4e60-11ea-aba9-0242ac110007: Found 0 pods out of 5
Feb 13 12:55:55.079: INFO: Pod name wrapped-volume-race-28e3d7e5-4e60-11ea-aba9-0242ac110007: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-28e3d7e5-4e60-11ea-aba9-0242ac110007 in namespace e2e-tests-emptydir-wrapper-fx9fl, will wait for the garbage collector to delete the pods
Feb 13 12:57:57.323: INFO: Deleting ReplicationController wrapped-volume-race-28e3d7e5-4e60-11ea-aba9-0242ac110007 took: 62.112727ms
Feb 13 12:57:57.824: INFO: Terminating ReplicationController wrapped-volume-race-28e3d7e5-4e60-11ea-aba9-0242ac110007 pods took: 500.405413ms
STEP: Creating RC which spawns configmap-volume pods
Feb 13 12:58:44.020: INFO: Pod name wrapped-volume-race-908d988a-4e60-11ea-aba9-0242ac110007: Found 0 pods out of 5
Feb 13 12:58:49.043: INFO: Pod name wrapped-volume-race-908d988a-4e60-11ea-aba9-0242ac110007: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-908d988a-4e60-11ea-aba9-0242ac110007 in namespace e2e-tests-emptydir-wrapper-fx9fl, will wait for the garbage collector to delete the pods
Feb 13 13:01:23.331: INFO: Deleting ReplicationController wrapped-volume-race-908d988a-4e60-11ea-aba9-0242ac110007 took: 32.155831ms
Feb 13 13:01:23.732: INFO: Terminating ReplicationController wrapped-volume-race-908d988a-4e60-11ea-aba9-0242ac110007 pods took: 400.522502ms
STEP: Creating RC which spawns configmap-volume pods
Feb 13 13:02:23.457: INFO: Pod name wrapped-volume-race-1373134b-4e61-11ea-aba9-0242ac110007: Found 0 pods out of 5
Feb 13 13:02:28.515: INFO: Pod name wrapped-volume-race-1373134b-4e61-11ea-aba9-0242ac110007: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-1373134b-4e61-11ea-aba9-0242ac110007 in namespace e2e-tests-emptydir-wrapper-fx9fl, will wait for the garbage collector to delete the pods
Feb 13 13:04:54.827: INFO: Deleting ReplicationController wrapped-volume-race-1373134b-4e61-11ea-aba9-0242ac110007 took: 35.339564ms
Feb 13 13:04:55.128: INFO: Terminating ReplicationController wrapped-volume-race-1373134b-4e61-11ea-aba9-0242ac110007 pods took: 300.533253ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:05:54.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-fx9fl" for this suite.
Feb 13 13:06:02.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:06:02.325: INFO: namespace: e2e-tests-emptydir-wrapper-fx9fl, resource: bindings, ignored listing per whitelist
Feb 13 13:06:02.481: INFO: namespace e2e-tests-emptydir-wrapper-fx9fl deletion completed in 8.232344054s

• [SLOW TEST:615.514 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 13:06:02.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 13 13:06:02.763: INFO: Creating deployment "test-recreate-deployment"
Feb 13 13:06:02.776: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb 13 13:06:02.788: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Feb 13 13:06:05.063: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb 13 13:06:05.071: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195963, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 13:06:08.635: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195963, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 13:06:10.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195963, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 13:06:11.076: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195963, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 13:06:13.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195963, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 13:06:16.974: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195963, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 13:06:17.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195963, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 13:06:19.087: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195963, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 13:06:21.138: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195963, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717195962, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 13:06:23.097: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 13 13:06:23.146: INFO: Updating deployment test-recreate-deployment
Feb 13 13:06:23.146: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 13 13:06:24.076: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-tggmh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tggmh/deployments/test-recreate-deployment,UID:963dd126-4e61-11ea-a994-fa163e34d433,ResourceVersion:21538427,Generation:2,CreationTimestamp:2020-02-13 13:06:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-13 13:06:23 +0000 UTC 2020-02-13 13:06:23 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-13 13:06:24 +0000 UTC 2020-02-13 13:06:02 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb 13 13:06:24.178: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-tggmh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tggmh/replicasets/test-recreate-deployment-589c4bfd,UID:a2856cc6-4e61-11ea-a994-fa163e34d433,ResourceVersion:21538422,Generation:1,CreationTimestamp:2020-02-13 13:06:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 963dd126-4e61-11ea-a994-fa163e34d433 0xc0019ba61f 0xc0019ba630}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 13 13:06:24.178: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 13 13:06:24.178: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-tggmh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-tggmh/replicasets/test-recreate-deployment-5bf7f65dc,UID:964216d1-4e61-11ea-a994-fa163e34d433,ResourceVersion:21538414,Generation:2,CreationTimestamp:2020-02-13 13:06:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 963dd126-4e61-11ea-a994-fa163e34d433 0xc0019ba6f0 0xc0019ba6f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 13 13:06:24.200: INFO: Pod "test-recreate-deployment-589c4bfd-lr94m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-lr94m,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-tggmh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-tggmh/pods/test-recreate-deployment-589c4bfd-lr94m,UID:a29081bf-4e61-11ea-a994-fa163e34d433,ResourceVersion:21538426,Generation:0,CreationTimestamp:2020-02-13 13:06:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd a2856cc6-4e61-11ea-a994-fa163e34d433 0xc002a4b2bf 0xc002a4b2e0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-s68rr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s68rr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-s68rr true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002a4b360} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002a4b380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:06:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:06:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:06:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 13:06:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-13 13:06:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:06:24.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-tggmh" for this suite.
Feb 13 13:06:34.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:06:34.395: INFO: namespace: e2e-tests-deployment-tggmh, resource: bindings, ignored listing per whitelist
Feb 13 13:06:34.395: INFO: namespace e2e-tests-deployment-tggmh deletion completed in 10.181067688s

• [SLOW TEST:31.914 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 13:06:34.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-7qzl2
Feb 13 13:06:48.717: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-7qzl2
STEP: checking the pod's current state and verifying that restartCount is present
Feb 13 13:06:48.723: INFO: Initial restart count of pod liveness-http is 0
Feb 13 13:07:05.258: INFO: Restart count of pod e2e-tests-container-probe-7qzl2/liveness-http is now 1 (16.534802199s elapsed)
Feb 13 13:07:25.186: INFO: Restart count of pod e2e-tests-container-probe-7qzl2/liveness-http is now 2 (36.463079155s elapsed)
Feb 13 13:07:45.345: INFO: Restart count of pod e2e-tests-container-probe-7qzl2/liveness-http is now 3 (56.622053031s elapsed)
Feb 13 13:08:04.031: INFO: Restart count of pod e2e-tests-container-probe-7qzl2/liveness-http is now 4 (1m15.307334166s elapsed)
Feb 13 13:09:07.097: INFO: Restart count of pod e2e-tests-container-probe-7qzl2/liveness-http is now 5 (2m18.373524591s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:09:07.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-7qzl2" for this suite.
Feb 13 13:09:15.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:09:15.442: INFO: namespace: e2e-tests-container-probe-7qzl2, resource: bindings, ignored listing per whitelist
Feb 13 13:09:15.530: INFO: namespace e2e-tests-container-probe-7qzl2 deletion completed in 8.235819747s

• [SLOW TEST:161.134 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 13:09:15.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 13 13:09:25.015: INFO: 10 pods remaining
Feb 13 13:09:25.015: INFO: 8 pods has nil DeletionTimestamp
Feb 13 13:09:25.015: INFO: 
Feb 13 13:09:26.709: INFO: 0 pods remaining
Feb 13 13:09:26.709: INFO: 0 pods has nil DeletionTimestamp
Feb 13 13:09:26.709: INFO: 
STEP: Gathering metrics
W0213 13:09:31.311276       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 13 13:09:31.311: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:09:31.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-lgkr9" for this suite.
Feb 13 13:09:55.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:09:55.790: INFO: namespace: e2e-tests-gc-lgkr9, resource: bindings, ignored listing per whitelist
Feb 13 13:09:56.012: INFO: namespace e2e-tests-gc-lgkr9 deletion completed in 23.407346874s

• [SLOW TEST:40.482 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 13:09:56.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 13 13:09:57.901: INFO: Waiting up to 5m0s for pod "pod-225f94d7-4e62-11ea-aba9-0242ac110007" in namespace "e2e-tests-emptydir-c9s5n" to be "success or failure"
Feb 13 13:09:57.933: INFO: Pod "pod-225f94d7-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 31.874922ms
Feb 13 13:09:59.963: INFO: Pod "pod-225f94d7-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061326457s
Feb 13 13:10:01.985: INFO: Pod "pod-225f94d7-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08346466s
Feb 13 13:10:04.003: INFO: Pod "pod-225f94d7-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1016472s
Feb 13 13:10:06.489: INFO: Pod "pod-225f94d7-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.588006336s
Feb 13 13:10:08.576: INFO: Pod "pod-225f94d7-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.674727084s
Feb 13 13:10:10.610: INFO: Pod "pod-225f94d7-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 12.708665028s
Feb 13 13:10:14.057: INFO: Pod "pod-225f94d7-4e62-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.155325831s
STEP: Saw pod success
Feb 13 13:10:14.057: INFO: Pod "pod-225f94d7-4e62-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 13:10:14.070: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-225f94d7-4e62-11ea-aba9-0242ac110007 container test-container: 
STEP: delete the pod
Feb 13 13:10:14.734: INFO: Waiting for pod pod-225f94d7-4e62-11ea-aba9-0242ac110007 to disappear
Feb 13 13:10:14.927: INFO: Pod pod-225f94d7-4e62-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:10:14.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-c9s5n" for this suite.
Feb 13 13:10:21.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:10:21.145: INFO: namespace: e2e-tests-emptydir-c9s5n, resource: bindings, ignored listing per whitelist
Feb 13 13:10:21.156: INFO: namespace e2e-tests-emptydir-c9s5n deletion completed in 6.217646208s

• [SLOW TEST:25.143 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 13:10:21.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 13 13:10:21.457: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.149976ms)
Feb 13 13:10:21.461: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.095066ms)
Feb 13 13:10:21.466: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.26529ms)
Feb 13 13:10:21.470: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.498697ms)
Feb 13 13:10:21.475: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.484122ms)
Feb 13 13:10:21.479: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.966648ms)
Feb 13 13:10:21.483: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.848995ms)
Feb 13 13:10:21.487: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.190735ms)
Feb 13 13:10:21.491: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.157812ms)
Feb 13 13:10:21.495: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.645087ms)
Feb 13 13:10:21.498: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.622332ms)
Feb 13 13:10:21.502: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.644897ms)
Feb 13 13:10:21.506: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.602098ms)
Feb 13 13:10:21.509: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.450925ms)
Feb 13 13:10:21.513: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.841051ms)
Feb 13 13:10:21.517: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.004793ms)
Feb 13 13:10:21.521: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.657126ms)
Feb 13 13:10:21.524: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.332074ms)
Feb 13 13:10:21.529: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.845506ms)
Feb 13 13:10:21.533: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.004292ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:10:21.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-fgcb8" for this suite.
Feb 13 13:10:27.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:10:27.768: INFO: namespace: e2e-tests-proxy-fgcb8, resource: bindings, ignored listing per whitelist
Feb 13 13:10:27.798: INFO: namespace e2e-tests-proxy-fgcb8 deletion completed in 6.261365374s

• [SLOW TEST:6.641 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 13:10:27.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 13 13:10:28.006: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:10:29.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-lnckp" for this suite.
Feb 13 13:10:35.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:10:35.409: INFO: namespace: e2e-tests-custom-resource-definition-lnckp, resource: bindings, ignored listing per whitelist
Feb 13 13:10:35.479: INFO: namespace e2e-tests-custom-resource-definition-lnckp deletion completed in 6.213353572s

• [SLOW TEST:7.681 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 13:10:35.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-38de6987-4e62-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 13 13:10:35.695: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-38e9784b-4e62-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-pdb2v" to be "success or failure"
Feb 13 13:10:35.711: INFO: Pod "pod-projected-configmaps-38e9784b-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.69301ms
Feb 13 13:10:37.800: INFO: Pod "pod-projected-configmaps-38e9784b-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105518036s
Feb 13 13:10:39.839: INFO: Pod "pod-projected-configmaps-38e9784b-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144586285s
Feb 13 13:10:42.718: INFO: Pod "pod-projected-configmaps-38e9784b-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.023152916s
Feb 13 13:10:45.054: INFO: Pod "pod-projected-configmaps-38e9784b-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.359324746s
Feb 13 13:10:47.314: INFO: Pod "pod-projected-configmaps-38e9784b-4e62-11ea-aba9-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 11.618899057s
Feb 13 13:10:49.326: INFO: Pod "pod-projected-configmaps-38e9784b-4e62-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.631464976s
STEP: Saw pod success
Feb 13 13:10:49.326: INFO: Pod "pod-projected-configmaps-38e9784b-4e62-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 13:10:49.330: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-38e9784b-4e62-11ea-aba9-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 13 13:10:50.522: INFO: Waiting for pod pod-projected-configmaps-38e9784b-4e62-11ea-aba9-0242ac110007 to disappear
Feb 13 13:10:50.535: INFO: Pod pod-projected-configmaps-38e9784b-4e62-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:10:50.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pdb2v" for this suite.
Feb 13 13:10:57.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:10:57.457: INFO: namespace: e2e-tests-projected-pdb2v, resource: bindings, ignored listing per whitelist
Feb 13 13:10:57.521: INFO: namespace e2e-tests-projected-pdb2v deletion completed in 6.929159194s

• [SLOW TEST:22.042 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 13:10:57.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-462569a6-4e62-11ea-aba9-0242ac110007
STEP: Creating secret with name s-test-opt-upd-46256aa4-4e62-11ea-aba9-0242ac110007
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-462569a6-4e62-11ea-aba9-0242ac110007
STEP: Updating secret s-test-opt-upd-46256aa4-4e62-11ea-aba9-0242ac110007
STEP: Creating secret with name s-test-opt-create-46256aea-4e62-11ea-aba9-0242ac110007
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:12:32.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-f5zm4" for this suite.
Feb 13 13:12:56.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:12:56.966: INFO: namespace: e2e-tests-secrets-f5zm4, resource: bindings, ignored listing per whitelist
Feb 13 13:12:57.007: INFO: namespace e2e-tests-secrets-f5zm4 deletion completed in 24.199986461s

• [SLOW TEST:119.486 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 13:12:57.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 13 13:13:23.484: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 13 13:13:23.494: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 13 13:13:25.494: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 13 13:13:25.510: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 13 13:13:27.494: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 13 13:13:27.510: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 13 13:13:29.494: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 13 13:13:29.512: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 13 13:13:31.494: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 13 13:13:31.518: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 13 13:13:33.494: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 13 13:13:33.535: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 13 13:13:35.494: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 13 13:13:35.512: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 13 13:13:37.494: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 13 13:13:37.512: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 13 13:13:39.494: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 13 13:13:39.516: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 13 13:13:41.494: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 13 13:13:41.515: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 13 13:13:43.494: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 13 13:13:43.521: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 13 13:13:45.494: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 13 13:13:45.515: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:13:45.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-vqtgb" for this suite.
Feb 13 13:14:11.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:14:11.888: INFO: namespace: e2e-tests-container-lifecycle-hook-vqtgb, resource: bindings, ignored listing per whitelist
Feb 13 13:14:11.890: INFO: namespace e2e-tests-container-lifecycle-hook-vqtgb deletion completed in 26.331992265s

• [SLOW TEST:74.882 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 13:14:11.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-ba0801bb-4e62-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 13 13:14:12.436: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ba0ab753-4e62-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-wh8ns" to be "success or failure"
Feb 13 13:14:12.481: INFO: Pod "pod-projected-secrets-ba0ab753-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 44.264115ms
Feb 13 13:14:14.526: INFO: Pod "pod-projected-secrets-ba0ab753-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089078206s
Feb 13 13:14:16.562: INFO: Pod "pod-projected-secrets-ba0ab753-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125751716s
Feb 13 13:14:20.264: INFO: Pod "pod-projected-secrets-ba0ab753-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.827230485s
Feb 13 13:14:22.339: INFO: Pod "pod-projected-secrets-ba0ab753-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.902655772s
Feb 13 13:14:24.633: INFO: Pod "pod-projected-secrets-ba0ab753-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 12.196538146s
Feb 13 13:14:27.529: INFO: Pod "pod-projected-secrets-ba0ab753-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.092426568s
Feb 13 13:14:29.548: INFO: Pod "pod-projected-secrets-ba0ab753-4e62-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.111984814s
STEP: Saw pod success
Feb 13 13:14:29.549: INFO: Pod "pod-projected-secrets-ba0ab753-4e62-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 13:14:29.565: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-ba0ab753-4e62-11ea-aba9-0242ac110007 container projected-secret-volume-test: 
STEP: delete the pod
Feb 13 13:14:29.629: INFO: Waiting for pod pod-projected-secrets-ba0ab753-4e62-11ea-aba9-0242ac110007 to disappear
Feb 13 13:14:29.636: INFO: Pod pod-projected-secrets-ba0ab753-4e62-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:14:29.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wh8ns" for this suite.
Feb 13 13:14:36.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:14:36.140: INFO: namespace: e2e-tests-projected-wh8ns, resource: bindings, ignored listing per whitelist
Feb 13 13:14:36.253: INFO: namespace e2e-tests-projected-wh8ns deletion completed in 6.608298509s

• [SLOW TEST:24.363 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
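The test above consumes a projected secret through a volume with an items mapping and an explicit per-item file mode. A minimal sketch of the kind of pod spec such a test creates (all names, keys, and paths here are illustrative placeholders, not the actual test fixtures):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test      # hypothetical secret name
          items:
          - key: data-1                    # secret key to project
            path: new-path-data-1          # remapped file name in the volume
            mode: 0400                     # the "Item Mode" under test
```

The pod runs to completion (Phase="Succeeded"), which is why the framework waits on the "success or failure" condition rather than readiness.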
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 13:14:36.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-c8700b66-4e62-11ea-aba9-0242ac110007
STEP: Creating secret with name s-test-opt-upd-c8700c72-4e62-11ea-aba9-0242ac110007
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-c8700b66-4e62-11ea-aba9-0242ac110007
STEP: Updating secret s-test-opt-upd-c8700c72-4e62-11ea-aba9-0242ac110007
STEP: Creating secret with name s-test-opt-create-c8700c89-4e62-11ea-aba9-0242ac110007
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:15:00.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rjp7f" for this suite.
Feb 13 13:15:26.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:15:27.132: INFO: namespace: e2e-tests-projected-rjp7f, resource: bindings, ignored listing per whitelist
Feb 13 13:15:27.161: INFO: namespace e2e-tests-projected-rjp7f deletion completed in 26.23079995s

• [SLOW TEST:50.908 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
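This test creates, deletes, and updates secrets referenced as optional volume sources and verifies the kubelet reflects those changes in the mounted volume. The key spec detail is `optional: true`, which lets the pod start (and keep running) even when a referenced secret is absent. A hedged sketch with placeholder names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-optional-example   # hypothetical name
spec:
  containers:
  - name: opt-watcher
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: opt-del
      mountPath: /etc/secret-volumes/delete
    - name: opt-upd
      mountPath: /etc/secret-volumes/update
  volumes:
  - name: opt-del
    secret:
      secretName: s-test-opt-del       # deleted mid-test; pod keeps running
      optional: true
  - name: opt-upd
    secret:
      secretName: s-test-opt-upd       # updated mid-test; volume contents refresh
      optional: true
```

Because secret volumes are synced on the kubelet's periodic resync, the test polls ("waiting to observe update in volume") rather than asserting an immediate change.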
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 13:15:27.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-e6b82992-4e62-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 13 13:15:27.396: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e6b8d154-4e62-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-n448m" to be "success or failure"
Feb 13 13:15:27.423: INFO: Pod "pod-projected-configmaps-e6b8d154-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 26.934867ms
Feb 13 13:15:30.389: INFO: Pod "pod-projected-configmaps-e6b8d154-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.993297596s
Feb 13 13:15:32.423: INFO: Pod "pod-projected-configmaps-e6b8d154-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 5.026664955s
Feb 13 13:15:34.436: INFO: Pod "pod-projected-configmaps-e6b8d154-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.039500844s
Feb 13 13:15:36.456: INFO: Pod "pod-projected-configmaps-e6b8d154-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.060336326s
Feb 13 13:15:38.814: INFO: Pod "pod-projected-configmaps-e6b8d154-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.418237585s
Feb 13 13:15:41.124: INFO: Pod "pod-projected-configmaps-e6b8d154-4e62-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.72799807s
STEP: Saw pod success
Feb 13 13:15:41.124: INFO: Pod "pod-projected-configmaps-e6b8d154-4e62-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 13:15:41.143: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-e6b8d154-4e62-11ea-aba9-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 13 13:15:41.498: INFO: Waiting for pod pod-projected-configmaps-e6b8d154-4e62-11ea-aba9-0242ac110007 to disappear
Feb 13 13:15:41.517: INFO: Pod pod-projected-configmaps-e6b8d154-4e62-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:15:41.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-n448m" for this suite.
Feb 13 13:15:47.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:15:47.621: INFO: namespace: e2e-tests-projected-n448m, resource: bindings, ignored listing per whitelist
Feb 13 13:15:47.743: INFO: namespace e2e-tests-projected-n448m deletion completed in 6.214033014s

• [SLOW TEST:20.582 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
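The projected configMap variant mirrors the secret case: a configMap source inside a projected volume, with an items mapping that renames the projected file. A minimal illustrative sketch (names and keys are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # hypothetical name
          items:
          - key: data-2
            path: path/to/data-2                      # mapped path inside the volume
```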
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 13:15:47.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-f3230936-4e62-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 13 13:15:48.141: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f3247182-4e62-11ea-aba9-0242ac110007" in namespace "e2e-tests-projected-2tc8x" to be "success or failure"
Feb 13 13:15:48.165: INFO: Pod "pod-projected-configmaps-f3247182-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 24.068306ms
Feb 13 13:15:50.197: INFO: Pod "pod-projected-configmaps-f3247182-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056162889s
Feb 13 13:15:52.224: INFO: Pod "pod-projected-configmaps-f3247182-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083669184s
Feb 13 13:15:54.906: INFO: Pod "pod-projected-configmaps-f3247182-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.765699727s
Feb 13 13:15:57.196: INFO: Pod "pod-projected-configmaps-f3247182-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.055041411s
Feb 13 13:15:59.213: INFO: Pod "pod-projected-configmaps-f3247182-4e62-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.0725322s
Feb 13 13:16:01.230: INFO: Pod "pod-projected-configmaps-f3247182-4e62-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.089445369s
STEP: Saw pod success
Feb 13 13:16:01.230: INFO: Pod "pod-projected-configmaps-f3247182-4e62-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 13:16:01.239: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f3247182-4e62-11ea-aba9-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 13 13:16:01.965: INFO: Waiting for pod pod-projected-configmaps-f3247182-4e62-11ea-aba9-0242ac110007 to disappear
Feb 13 13:16:01.998: INFO: Pod pod-projected-configmaps-f3247182-4e62-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:16:01.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2tc8x" for this suite.
Feb 13 13:16:08.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:16:08.530: INFO: namespace: e2e-tests-projected-2tc8x, resource: bindings, ignored listing per whitelist
Feb 13 13:16:08.573: INFO: namespace e2e-tests-projected-2tc8x deletion completed in 6.540783066s

• [SLOW TEST:20.829 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 13:16:08.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 13 13:16:08.862: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-vdrvw,SelfLink:/api/v1/namespaces/e2e-tests-watch-vdrvw/configmaps/e2e-watch-test-resource-version,UID:ff75bd8f-4e62-11ea-a994-fa163e34d433,ResourceVersion:21539557,Generation:0,CreationTimestamp:2020-02-13 13:16:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 13 13:16:08.862: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-vdrvw,SelfLink:/api/v1/namespaces/e2e-tests-watch-vdrvw/configmaps/e2e-watch-test-resource-version,UID:ff75bd8f-4e62-11ea-a994-fa163e34d433,ResourceVersion:21539558,Generation:0,CreationTimestamp:2020-02-13 13:16:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:16:08.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-vdrvw" for this suite.
Feb 13 13:16:15.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:16:15.081: INFO: namespace: e2e-tests-watch-vdrvw, resource: bindings, ignored listing per whitelist
Feb 13 13:16:15.128: INFO: namespace e2e-tests-watch-vdrvw deletion completed in 6.145380102s

• [SLOW TEST:6.555 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 13:16:15.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-dfgtf/configmap-test-0352943e-4e63-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 13 13:16:15.286: INFO: Waiting up to 5m0s for pod "pod-configmaps-0353be5d-4e63-11ea-aba9-0242ac110007" in namespace "e2e-tests-configmap-dfgtf" to be "success or failure"
Feb 13 13:16:15.293: INFO: Pod "pod-configmaps-0353be5d-4e63-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.031621ms
Feb 13 13:16:17.308: INFO: Pod "pod-configmaps-0353be5d-4e63-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021700157s
Feb 13 13:16:19.325: INFO: Pod "pod-configmaps-0353be5d-4e63-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039174093s
Feb 13 13:16:22.330: INFO: Pod "pod-configmaps-0353be5d-4e63-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.043261408s
Feb 13 13:16:24.573: INFO: Pod "pod-configmaps-0353be5d-4e63-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.287058733s
Feb 13 13:16:26.592: INFO: Pod "pod-configmaps-0353be5d-4e63-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.305953824s
Feb 13 13:16:29.762: INFO: Pod "pod-configmaps-0353be5d-4e63-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.475295377s
STEP: Saw pod success
Feb 13 13:16:29.762: INFO: Pod "pod-configmaps-0353be5d-4e63-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 13:16:29.790: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-0353be5d-4e63-11ea-aba9-0242ac110007 container env-test: 
STEP: delete the pod
Feb 13 13:16:30.353: INFO: Waiting for pod pod-configmaps-0353be5d-4e63-11ea-aba9-0242ac110007 to disappear
Feb 13 13:16:30.371: INFO: Pod pod-configmaps-0353be5d-4e63-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:16:30.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-dfgtf" for this suite.
Feb 13 13:16:36.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:16:36.548: INFO: namespace: e2e-tests-configmap-dfgtf, resource: bindings, ignored listing per whitelist
Feb 13 13:16:36.654: INFO: namespace e2e-tests-configmap-dfgtf deletion completed in 6.276664281s

• [SLOW TEST:21.525 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
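Here the configMap is consumed through the container environment instead of a volume. A sketch of the relevant pod fragment, using placeholder names; `valueFrom.configMapKeyRef` pulls a single key, while `envFrom` would import every key:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env-example    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test        # hypothetical configMap name
          key: data-1
```

Unlike volume-based consumption, environment variables are resolved once at container start, so updates to the configMap are not reflected in a running container.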
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 13:16:36.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-bznn7/configmap-test-102b85fd-4e63-11ea-aba9-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 13 13:16:36.844: INFO: Waiting up to 5m0s for pod "pod-configmaps-102c873a-4e63-11ea-aba9-0242ac110007" in namespace "e2e-tests-configmap-bznn7" to be "success or failure"
Feb 13 13:16:36.886: INFO: Pod "pod-configmaps-102c873a-4e63-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 42.643015ms
Feb 13 13:16:38.898: INFO: Pod "pod-configmaps-102c873a-4e63-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054427051s
Feb 13 13:16:40.918: INFO: Pod "pod-configmaps-102c873a-4e63-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073821263s
Feb 13 13:16:43.209: INFO: Pod "pod-configmaps-102c873a-4e63-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.365503192s
Feb 13 13:16:45.221: INFO: Pod "pod-configmaps-102c873a-4e63-11ea-aba9-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.377144817s
Feb 13 13:16:47.233: INFO: Pod "pod-configmaps-102c873a-4e63-11ea-aba9-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.389313283s
STEP: Saw pod success
Feb 13 13:16:47.233: INFO: Pod "pod-configmaps-102c873a-4e63-11ea-aba9-0242ac110007" satisfied condition "success or failure"
Feb 13 13:16:47.240: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-102c873a-4e63-11ea-aba9-0242ac110007 container env-test: 
STEP: delete the pod
Feb 13 13:16:47.323: INFO: Waiting for pod pod-configmaps-102c873a-4e63-11ea-aba9-0242ac110007 to disappear
Feb 13 13:16:47.338: INFO: Pod pod-configmaps-102c873a-4e63-11ea-aba9-0242ac110007 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:16:47.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bznn7" for this suite.
Feb 13 13:16:54.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:16:55.893: INFO: namespace: e2e-tests-configmap-bznn7, resource: bindings, ignored listing per whitelist
Feb 13 13:16:55.893: INFO: namespace e2e-tests-configmap-bznn7 deletion completed in 8.549565422s

• [SLOW TEST:19.239 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 13:16:55.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 13 13:16:56.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-77xr8'
Feb 13 13:16:59.308: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 13 13:16:59.308: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Feb 13 13:17:03.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-77xr8'
Feb 13 13:17:04.395: INFO: stderr: ""
Feb 13 13:17:04.395: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:17:04.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-77xr8" for this suite.
Feb 13 13:17:10.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:17:10.849: INFO: namespace: e2e-tests-kubectl-77xr8, resource: bindings, ignored listing per whitelist
Feb 13 13:17:10.895: INFO: namespace e2e-tests-kubectl-77xr8 deletion completed in 6.486910733s

• [SLOW TEST:15.002 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
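The stderr above notes that `kubectl run --generator=deployment/v1beta1` is deprecated; per the stdout it created a `deployment.extensions` object. A roughly equivalent explicit manifest under the stable `apps/v1` API (the label key `run` matches what the generator applied; treat this as a sketch, not the generator's exact output):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```

In current kubectl versions the same result comes from `kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine`.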
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 13:17:10.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:17:21.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-rdtgc" for this suite.
Feb 13 13:17:28.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:17:28.830: INFO: namespace: e2e-tests-emptydir-wrapper-rdtgc, resource: bindings, ignored listing per whitelist
Feb 13 13:17:28.864: INFO: namespace e2e-tests-emptydir-wrapper-rdtgc deletion completed in 7.368891385s

• [SLOW TEST:17.969 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
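The "should not conflict" test mounts different kubelet-managed volume types side by side in one pod and checks that their underlying emptyDir wrappers don't collide. An illustrative pod shape (fixture names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-wrapper-example   # hypothetical name
spec:
  containers:
  - name: wrapper-test
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-test-secret        # hypothetical secret
  - name: configmap-volume
    configMap:
      name: wrapper-test-configmap           # hypothetical configMap
```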
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 13 13:17:28.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-blr2s
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb 13 13:17:29.443: INFO: Found 0 stateful pods, waiting for 3
Feb 13 13:17:39.457: INFO: Found 2 stateful pods, waiting for 3
Feb 13 13:17:49.538: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 13:17:49.538: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 13:17:49.538: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 13 13:17:59.460: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 13:17:59.460: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 13:17:59.460: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 13 13:18:09.492: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 13:18:09.492: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 13:18:09.492: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 13 13:18:09.642: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 13 13:18:19.750: INFO: Updating stateful set ss2
Feb 13 13:18:19.826: INFO: Waiting for Pod e2e-tests-statefulset-blr2s/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb 13 13:18:32.923: INFO: Found 2 stateful pods, waiting for 3
Feb 13 13:18:43.382: INFO: Found 2 stateful pods, waiting for 3
Feb 13 13:18:52.968: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 13:18:52.968: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 13:18:52.968: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 13 13:19:02.946: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 13:19:02.946: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 13 13:19:02.946: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 13 13:19:03.017: INFO: Updating stateful set ss2
Feb 13 13:19:03.066: INFO: Waiting for Pod e2e-tests-statefulset-blr2s/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 13 13:19:13.226: INFO: Updating stateful set ss2
Feb 13 13:19:13.261: INFO: Waiting for StatefulSet e2e-tests-statefulset-blr2s/ss2 to complete update
Feb 13 13:19:13.261: INFO: Waiting for Pod e2e-tests-statefulset-blr2s/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 13 13:19:23.284: INFO: Waiting for StatefulSet e2e-tests-statefulset-blr2s/ss2 to complete update
Feb 13 13:19:23.284: INFO: Waiting for Pod e2e-tests-statefulset-blr2s/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 13 13:19:33.287: INFO: Waiting for StatefulSet e2e-tests-statefulset-blr2s/ss2 to complete update
Feb 13 13:19:33.287: INFO: Waiting for Pod e2e-tests-statefulset-blr2s/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 13 13:19:43.567: INFO: Waiting for StatefulSet e2e-tests-statefulset-blr2s/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 13 13:19:53.285: INFO: Deleting all statefulset in ns e2e-tests-statefulset-blr2s
Feb 13 13:19:53.288: INFO: Scaling statefulset ss2 to 0
Feb 13 13:20:33.341: INFO: Waiting for statefulset status.replicas updated to 0
Feb 13 13:20:33.353: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 13 13:20:33.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-blr2s" for this suite.
Feb 13 13:20:41.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 13 13:20:41.808: INFO: namespace: e2e-tests-statefulset-blr2s, resource: bindings, ignored listing per whitelist
Feb 13 13:20:41.849: INFO: namespace e2e-tests-statefulset-blr2s deletion completed in 8.346584297s

• [SLOW TEST:192.985 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
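The canary and phased rolling updates above are driven by the StatefulSet's `rollingUpdate.partition` field: only pods with an ordinal greater than or equal to the partition are updated to the new revision, so lowering the partition step by step phases the rollout in. A sketch of the relevant strategy stanza (replicas and image match the test; the rest is illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2        # canary: only ss2-2 receives the new template
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine   # updated image under test
```

Setting `partition: 3` (greater than the replica count) applies no update at all, which is the "Not applying an update when the partition is greater than the number of replicas" step; `partition: 0` completes the phased rollout.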
SS
Feb 13 13:20:41.850: INFO: Running AfterSuite actions on all nodes
Feb 13 13:20:41.850: INFO: Running AfterSuite actions on node 1
Feb 13 13:20:41.850: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [k8s.io] Probing container [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:107

Ran 199 of 2164 Specs in 9208.600 seconds
FAIL! -- 198 Passed | 1 Failed | 0 Pending | 1965 Skipped --- FAIL: TestE2E (9208.88s)
FAIL