I0512 07:10:14.672189 6 e2e.go:224] Starting e2e run "a0455d6a-941f-11ea-bb6f-0242ac11001c" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589267414 - Will randomize all specs
Will run 201 of 2164 specs

May 12 07:10:14.856: INFO: >>> kubeConfig: /root/.kube/config
May 12 07:10:14.858: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 12 07:10:14.870: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 12 07:10:14.900: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 12 07:10:14.900: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 12 07:10:14.900: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 12 07:10:14.906: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 12 07:10:14.906: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 12 07:10:14.906: INFO: e2e test version: v1.13.12
May 12 07:10:14.907: INFO: kube-apiserver version: v1.13.12
[sig-storage] Downward API volume
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 07:10:14.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
May 12 07:10:15.031: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 12 07:10:15.036: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0c42ba4-941f-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-downward-api-47mtk" to be "success or failure"
May 12 07:10:15.048: INFO: Pod "downwardapi-volume-a0c42ba4-941f-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.048746ms
May 12 07:10:17.677: INFO: Pod "downwardapi-volume-a0c42ba4-941f-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.641294391s
May 12 07:10:19.682: INFO: Pod "downwardapi-volume-a0c42ba4-941f-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.64587956s
May 12 07:10:21.686: INFO: Pod "downwardapi-volume-a0c42ba4-941f-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.650173803s
STEP: Saw pod success
May 12 07:10:21.686: INFO: Pod "downwardapi-volume-a0c42ba4-941f-11ea-bb6f-0242ac11001c" satisfied condition "success or failure"
May 12 07:10:21.689: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a0c42ba4-941f-11ea-bb6f-0242ac11001c container client-container:
STEP: delete the pod
May 12 07:10:21.828: INFO: Waiting for pod downwardapi-volume-a0c42ba4-941f-11ea-bb6f-0242ac11001c to disappear
May 12 07:10:21.920: INFO: Pod downwardapi-volume-a0c42ba4-941f-11ea-bb6f-0242ac11001c no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 07:10:21.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-47mtk" for this suite.
May 12 07:10:28.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 07:10:28.387: INFO: namespace: e2e-tests-downward-api-47mtk, resource: bindings, ignored listing per whitelist
May 12 07:10:28.408: INFO: namespace e2e-tests-downward-api-47mtk deletion completed in 6.480287772s
• [SLOW TEST:13.500 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
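(For readers reproducing the downward API check above by hand, a minimal sketch of the kind of pod the test creates; the pod name, image, and mount path are illustrative, not the test's exact spec. Because no CPU limit is set on the container, the downward API file reports the node's allocatable CPU instead.)

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-cpu-demo          # illustrative name
  spec:
    containers:
    - name: client-container
      image: busybox                    # illustrative image
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: "cpu_limit"
          resourceFieldRef:
            containerName: client-container
            resource: limits.cpu        # with no limit set, this resolves to node allocatable CPU
  EOF
  kubectl logs downwardapi-cpu-demo     # prints the allocatable CPU value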
[sig-storage] Projected secret
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 07:10:28.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-a903b432-941f-11ea-bb6f-0242ac11001c
STEP: Creating a pod to test consume secrets
May 12 07:10:29.160: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a905cdb6-941f-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-6xw2r" to be "success or failure"
May 12 07:10:29.185: INFO: Pod "pod-projected-secrets-a905cdb6-941f-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 25.282101ms
May 12 07:10:31.474: INFO: Pod "pod-projected-secrets-a905cdb6-941f-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313931683s
May 12 07:10:33.507: INFO: Pod "pod-projected-secrets-a905cdb6-941f-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.346970262s
May 12 07:10:35.791: INFO: Pod "pod-projected-secrets-a905cdb6-941f-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.631698166s
STEP: Saw pod success
May 12 07:10:35.791: INFO: Pod "pod-projected-secrets-a905cdb6-941f-11ea-bb6f-0242ac11001c" satisfied condition "success or failure"
May 12 07:10:35.795: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-a905cdb6-941f-11ea-bb6f-0242ac11001c container projected-secret-volume-test:
STEP: delete the pod
May 12 07:10:36.066: INFO: Waiting for pod pod-projected-secrets-a905cdb6-941f-11ea-bb6f-0242ac11001c to disappear
May 12 07:10:36.127: INFO: Pod pod-projected-secrets-a905cdb6-941f-11ea-bb6f-0242ac11001c no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 07:10:36.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6xw2r" for this suite.
May 12 07:10:44.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 07:10:44.396: INFO: namespace: e2e-tests-projected-6xw2r, resource: bindings, ignored listing per whitelist
May 12 07:10:44.448: INFO: namespace e2e-tests-projected-6xw2r deletion completed in 8.316718022s
• [SLOW TEST:16.040 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 07:10:44.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-b26c1dcb-941f-11ea-bb6f-0242ac11001c
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-b26c1dcb-941f-11ea-bb6f-0242ac11001c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 07:10:50.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zjdl4" for this suite.
May 12 07:11:14.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:11:14.872: INFO: namespace: e2e-tests-projected-zjdl4, resource: bindings, ignored listing per whitelist May 12 07:11:15.237: INFO: namespace e2e-tests-projected-zjdl4 deletion completed in 24.392933717s • [SLOW TEST:30.789 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:11:15.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-s6tb2 STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 07:11:16.086: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 12 07:11:49.219: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.52:8080/dial?request=hostName&protocol=http&host=10.244.1.51&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-s6tb2 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 07:11:49.219: INFO: >>> kubeConfig: /root/.kube/config I0512 07:11:49.256427 6 log.go:172] (0xc001362420) (0xc0017d7720) Create stream I0512 07:11:49.256476 6 log.go:172] (0xc001362420) (0xc0017d7720) Stream added, broadcasting: 1 I0512 07:11:49.260635 6 log.go:172] (0xc001362420) Reply frame received for 1 I0512 07:11:49.260669 6 log.go:172] (0xc001362420) (0xc000e480a0) Create stream I0512 07:11:49.260677 6 log.go:172] (0xc001362420) (0xc000e480a0) Stream added, broadcasting: 3 I0512 07:11:49.261440 6 log.go:172] (0xc001362420) Reply frame received for 3 I0512 07:11:49.261480 6 log.go:172] (0xc001362420) (0xc001896000) Create stream I0512 07:11:49.261492 6 log.go:172] (0xc001362420) (0xc001896000) Stream added, broadcasting: 5 I0512 07:11:49.262115 6 log.go:172] (0xc001362420) Reply frame received for 5 I0512 07:11:49.341719 6 log.go:172] (0xc001362420) Data frame received for 3 I0512 07:11:49.341747 6 log.go:172] (0xc000e480a0) (3) Data frame handling I0512 07:11:49.341777 6 log.go:172] (0xc000e480a0) (3) Data frame sent I0512 07:11:49.342148 6 log.go:172] (0xc001362420) Data frame received for 3 I0512 07:11:49.342168 6 log.go:172] (0xc000e480a0) (3) Data frame handling I0512 07:11:49.342338 6 log.go:172] (0xc001362420) Data frame received for 5 I0512 07:11:49.342355 6 log.go:172] (0xc001896000) (5) Data frame handling I0512 07:11:49.344146 6 log.go:172] (0xc001362420) Data frame 
received for 1 I0512 07:11:49.344217 6 log.go:172] (0xc0017d7720) (1) Data frame handling I0512 07:11:49.344281 6 log.go:172] (0xc0017d7720) (1) Data frame sent I0512 07:11:49.344313 6 log.go:172] (0xc001362420) (0xc0017d7720) Stream removed, broadcasting: 1 I0512 07:11:49.344327 6 log.go:172] (0xc001362420) Go away received I0512 07:11:49.344458 6 log.go:172] (0xc001362420) (0xc0017d7720) Stream removed, broadcasting: 1 I0512 07:11:49.344480 6 log.go:172] (0xc001362420) (0xc000e480a0) Stream removed, broadcasting: 3 I0512 07:11:49.344489 6 log.go:172] (0xc001362420) (0xc001896000) Stream removed, broadcasting: 5 May 12 07:11:49.344: INFO: Waiting for endpoints: map[] May 12 07:11:49.347: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.52:8080/dial?request=hostName&protocol=http&host=10.244.2.251&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-s6tb2 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 07:11:49.347: INFO: >>> kubeConfig: /root/.kube/config I0512 07:11:49.378316 6 log.go:172] (0xc0011744d0) (0xc001628140) Create stream I0512 07:11:49.378337 6 log.go:172] (0xc0011744d0) (0xc001628140) Stream added, broadcasting: 1 I0512 07:11:49.380288 6 log.go:172] (0xc0011744d0) Reply frame received for 1 I0512 07:11:49.380320 6 log.go:172] (0xc0011744d0) (0xc000e48140) Create stream I0512 07:11:49.380329 6 log.go:172] (0xc0011744d0) (0xc000e48140) Stream added, broadcasting: 3 I0512 07:11:49.381033 6 log.go:172] (0xc0011744d0) Reply frame received for 3 I0512 07:11:49.381083 6 log.go:172] (0xc0011744d0) (0xc001948000) Create stream I0512 07:11:49.381098 6 log.go:172] (0xc0011744d0) (0xc001948000) Stream added, broadcasting: 5 I0512 07:11:49.382049 6 log.go:172] (0xc0011744d0) Reply frame received for 5 I0512 07:11:49.462843 6 log.go:172] (0xc0011744d0) Data frame received for 3 I0512 07:11:49.462877 6 log.go:172] (0xc000e48140) (3) Data frame handling I0512 07:11:49.462893 6 log.go:172] (0xc000e48140) (3) Data frame sent I0512 07:11:49.463244 6 log.go:172] (0xc0011744d0) Data frame received for 3 I0512 07:11:49.463257 6 log.go:172] (0xc000e48140) (3) Data frame handling I0512 07:11:49.463274 6 log.go:172] (0xc0011744d0) Data frame received for 5 I0512 07:11:49.463293 6 log.go:172] (0xc001948000) (5) Data frame handling I0512 07:11:49.465038 6 log.go:172] (0xc0011744d0) Data frame received for 1 I0512 07:11:49.465415 6 log.go:172] (0xc001628140) (1) Data frame handling I0512 07:11:49.465451 6 log.go:172] (0xc001628140) (1) Data frame sent I0512 07:11:49.465469 6 log.go:172] (0xc0011744d0) (0xc001628140) Stream removed, broadcasting: 1 I0512 07:11:49.465489 6 log.go:172] (0xc0011744d0) Go away received I0512 07:11:49.465589 6 log.go:172] (0xc0011744d0) (0xc001628140) Stream removed, broadcasting: 1 I0512 07:11:49.465623 6 log.go:172] (0xc0011744d0) (0xc000e48140) Stream removed, broadcasting: 3 I0512 07:11:49.465641 6 log.go:172] (0xc0011744d0) (0xc001948000) Stream removed, broadcasting: 5 May 12 07:11:49.465: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:11:49.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-s6tb2" for this suite. 
May 12 07:12:11.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:12:11.952: INFO: namespace: e2e-tests-pod-network-test-s6tb2, resource: bindings, ignored listing per whitelist May 12 07:12:11.994: INFO: namespace e2e-tests-pod-network-test-s6tb2 deletion completed in 22.524520162s • [SLOW TEST:56.756 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:12:11.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 07:12:12.631: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 12 07:12:12.677: INFO: Number of nodes with available pods: 0 May 12 07:12:12.677: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 12 07:12:13.234: INFO: Number of nodes with available pods: 0 May 12 07:12:13.234: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:14.237: INFO: Number of nodes with available pods: 0 May 12 07:12:14.237: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:15.338: INFO: Number of nodes with available pods: 0 May 12 07:12:15.338: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:16.494: INFO: Number of nodes with available pods: 0 May 12 07:12:16.494: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:17.529: INFO: Number of nodes with available pods: 0 May 12 07:12:17.529: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:18.338: INFO: Number of nodes with available pods: 0 May 12 07:12:18.338: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:19.238: INFO: Number of nodes with available pods: 1 May 12 07:12:19.238: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 12 07:12:19.344: INFO: Number of nodes with available pods: 1 May 12 07:12:19.344: INFO: Number of running nodes: 0, number of available pods: 1 May 12 07:12:20.348: INFO: Number of nodes with available pods: 0 May 12 07:12:20.348: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 12 07:12:20.530: INFO: Number of nodes with available pods: 0 May 12 07:12:20.530: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:21.535: INFO: Number of nodes with available pods: 0 May 12 07:12:21.535: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:22.739: INFO: Number of nodes with available pods: 0 May 12 07:12:22.739: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:23.535: INFO: Number of nodes with available pods: 0 May 12 07:12:23.535: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:24.597: INFO: Number of nodes with available pods: 0 May 12 07:12:24.597: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:25.778: INFO: Number of nodes with available pods: 0 May 12 07:12:25.778: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:26.803: INFO: Number of nodes with available pods: 0 May 12 07:12:26.803: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:27.932: INFO: Number of nodes with available pods: 0 May 12 07:12:27.932: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:28.667: INFO: Number of nodes with available pods: 0 May 12 07:12:28.667: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:29.856: INFO: Number of nodes with available pods: 0 May 12 07:12:29.856: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:30.535: INFO: Number of nodes with available pods: 0 May 12 07:12:30.535: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:31.533: INFO: Number of nodes with available pods: 0 May 12 07:12:31.533: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:32.534: INFO: Number of nodes with available pods: 0 May 12 07:12:32.534: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:34.627: INFO: Number of nodes with available pods: 0 May 12 07:12:34.627: INFO: Node hunter-worker is running 
more than one daemon pod May 12 07:12:35.534: INFO: Number of nodes with available pods: 0 May 12 07:12:35.534: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:36.751: INFO: Number of nodes with available pods: 0 May 12 07:12:36.751: INFO: Node hunter-worker is running more than one daemon pod May 12 07:12:37.532: INFO: Number of nodes with available pods: 1 May 12 07:12:37.533: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-d77zv, will wait for the garbage collector to delete the pods May 12 07:12:37.593: INFO: Deleting DaemonSet.extensions daemon-set took: 5.667846ms May 12 07:12:37.794: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.214937ms May 12 07:12:51.590: INFO: Number of nodes with available pods: 0 May 12 07:12:51.590: INFO: Number of running nodes: 0, number of available pods: 0 May 12 07:12:51.594: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-d77zv/daemonsets","resourceVersion":"10110177"},"items":null} May 12 07:12:51.596: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-d77zv/pods","resourceVersion":"10110177"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:12:51.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-d77zv" for this suite. 
May 12 07:12:58.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:12:58.140: INFO: namespace: e2e-tests-daemonsets-d77zv, resource: bindings, ignored listing per whitelist May 12 07:12:58.142: INFO: namespace e2e-tests-daemonsets-d77zv deletion completed in 6.416754657s • [SLOW TEST:46.148 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:12:58.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-6272m.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6272m.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6272m.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-6272m.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-6272m.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-6272m.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 07:13:08.434: INFO: DNS probes using e2e-tests-dns-6272m/dns-test-02138157-9420-11ea-bb6f-0242ac11001c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:13:08.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-6272m" for this suite. 
May 12 07:13:14.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 07:13:14.585: INFO: namespace: e2e-tests-dns-6272m, resource: bindings, ignored listing per whitelist
May 12 07:13:14.623: INFO: namespace e2e-tests-dns-6272m deletion completed in 6.080920317s
• [SLOW TEST:16.481 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
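(The wheezy/jessie prober pods in the DNS test above loop over dig lookups and write an OK marker per record type. A quicker manual spot-check of the same cluster DNS record can be run from a throwaway pod; the pod name is illustrative, and busybox:1.28 is chosen only because its nslookup applet is reliable for this kind of check.)

  kubectl run dns-check --rm -it --restart=Never --image=busybox:1.28 -- \
    nslookup kubernetes.default.svc.cluster.local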
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 07:13:14.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-f8hcj
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-f8hcj
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-f8hcj
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-f8hcj
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-f8hcj
May 12 07:13:23.843: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-f8hcj, name: ss-0, uid: 1007afca-9420-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete.
May 12 07:13:31.246: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-f8hcj, name: ss-0, uid: 1007afca-9420-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete.
May 12 07:13:31.392: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-f8hcj, name: ss-0, uid: 1007afca-9420-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete.
May 12 07:13:31.602: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-f8hcj
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-f8hcj
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-f8hcj and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
May 12 07:13:48.279: INFO: Deleting all statefulset in ns e2e-tests-statefulset-f8hcj
May 12 07:13:48.281: INFO: Scaling statefulset ss to 0
May 12 07:13:58.432: INFO: Waiting for statefulset status.replicas updated to 0
May 12 07:13:58.436: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 07:13:58.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-f8hcj" for this suite.
May 12 07:14:07.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 07:14:07.023: INFO: namespace: e2e-tests-statefulset-f8hcj, resource: bindings, ignored listing per whitelist
May 12 07:14:07.070: INFO: namespace e2e-tests-statefulset-f8hcj deletion completed in 8.503716726s
• [SLOW TEST:52.447 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 07:14:07.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-2b8ec00f-9420-11ea-bb6f-0242ac11001c
STEP: Creating a pod to test consume configMaps
May 12 07:14:08.177: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2bb22307-9420-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-2pctk" to be "success or failure"
May 12 07:14:08.217: INFO: Pod "pod-projected-configmaps-2bb22307-9420-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 40.034676ms
May 12 07:14:10.346: INFO: Pod "pod-projected-configmaps-2bb22307-9420-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16814431s
May 12 07:14:12.349: INFO: Pod "pod-projected-configmaps-2bb22307-9420-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171579195s
May 12 07:14:14.561: INFO: Pod "pod-projected-configmaps-2bb22307-9420-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.383572858s
May 12 07:14:16.564: INFO: Pod "pod-projected-configmaps-2bb22307-9420-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.38649158s
STEP: Saw pod success
May 12 07:14:16.564: INFO: Pod "pod-projected-configmaps-2bb22307-9420-11ea-bb6f-0242ac11001c" satisfied condition "success or failure"
May 12 07:14:16.566: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-2bb22307-9420-11ea-bb6f-0242ac11001c container projected-configmap-volume-test:
STEP: delete the pod
May 12 07:14:16.695: INFO: Waiting for pod pod-projected-configmaps-2bb22307-9420-11ea-bb6f-0242ac11001c to disappear
May 12 07:14:16.754: INFO: Pod pod-projected-configmaps-2bb22307-9420-11ea-bb6f-0242ac11001c no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 07:14:16.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2pctk" for this suite.
May 12 07:14:22.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 07:14:22.867: INFO: namespace: e2e-tests-projected-2pctk, resource: bindings, ignored listing per whitelist
May 12 07:14:22.885: INFO: namespace e2e-tests-projected-2pctk deletion completed in 6.127475803s
• [SLOW TEST:15.815 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 07:14:22.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
May 12 07:14:22.964: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix592855475/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 07:14:23.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fvf8s" for this suite.
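(Aside on the proxy test above: the same flag can be exercised by hand against a local Unix socket; the socket path below is illustrative.)

  kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
  curl -s --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/   # same /api/ output the test retrieves
  kill %1                                                               # stop the background proxy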
May 12 07:14:29.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:14:29.135: INFO: namespace: e2e-tests-kubectl-fvf8s, resource: bindings, ignored listing per whitelist May 12 07:14:29.162: INFO: namespace e2e-tests-kubectl-fvf8s deletion completed in 6.131444837s • [SLOW TEST:6.276 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:14:29.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 07:14:29.261: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3849d8d9-9420-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-8gzs2" to be "success or failure" May 12 07:14:29.265: INFO: Pod "downwardapi-volume-3849d8d9-9420-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.249209ms May 12 07:14:31.269: INFO: Pod "downwardapi-volume-3849d8d9-9420-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008221337s May 12 07:14:33.273: INFO: Pod "downwardapi-volume-3849d8d9-9420-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012339031s May 12 07:14:35.277: INFO: Pod "downwardapi-volume-3849d8d9-9420-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016024408s STEP: Saw pod success May 12 07:14:35.277: INFO: Pod "downwardapi-volume-3849d8d9-9420-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:14:35.279: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-3849d8d9-9420-11ea-bb6f-0242ac11001c container client-container: STEP: delete the pod May 12 07:14:35.353: INFO: Waiting for pod downwardapi-volume-3849d8d9-9420-11ea-bb6f-0242ac11001c to disappear May 12 07:14:35.362: INFO: Pod downwardapi-volume-3849d8d9-9420-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:14:35.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8gzs2" for this suite. 
May 12 07:14:43.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:14:43.390: INFO: namespace: e2e-tests-projected-8gzs2, resource: bindings, ignored listing per whitelist May 12 07:14:43.431: INFO: namespace e2e-tests-projected-8gzs2 deletion completed in 8.066207175s • [SLOW TEST:14.269 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:14:43.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 07:14:43.702: INFO: Waiting up to 5m0s for pod "downwardapi-volume-40e0196f-9420-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-downward-api-rhflj" to be "success or failure" May 12 07:14:43.708: INFO: Pod "downwardapi-volume-40e0196f-9420-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.61723ms May 12 07:14:45.939: INFO: Pod "downwardapi-volume-40e0196f-9420-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237029247s May 12 07:14:47.951: INFO: Pod "downwardapi-volume-40e0196f-9420-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.248930907s May 12 07:14:49.953: INFO: Pod "downwardapi-volume-40e0196f-9420-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.25166059s STEP: Saw pod success May 12 07:14:49.953: INFO: Pod "downwardapi-volume-40e0196f-9420-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:14:49.955: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-40e0196f-9420-11ea-bb6f-0242ac11001c container client-container: STEP: delete the pod May 12 07:14:49.999: INFO: Waiting for pod downwardapi-volume-40e0196f-9420-11ea-bb6f-0242ac11001c to disappear May 12 07:14:50.021: INFO: Pod downwardapi-volume-40e0196f-9420-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:14:50.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-rhflj" for this suite. 
May 12 07:14:56.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:14:56.122: INFO: namespace: e2e-tests-downward-api-rhflj, resource: bindings, ignored listing per whitelist May 12 07:14:56.155: INFO: namespace e2e-tests-downward-api-rhflj deletion completed in 6.130566906s • [SLOW TEST:12.724 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:14:56.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 12 07:14:56.861: INFO: Waiting up to 5m0s for pod "pod-4891589b-9420-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-emptydir-rdvds" to be "success or failure" May 12 07:14:56.981: INFO: Pod "pod-4891589b-9420-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 119.668659ms May 12 07:14:59.214: INFO: Pod "pod-4891589b-9420-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.352248766s May 12 07:15:01.219: INFO: Pod "pod-4891589b-9420-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.357778022s May 12 07:15:03.222: INFO: Pod "pod-4891589b-9420-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.360757423s STEP: Saw pod success May 12 07:15:03.222: INFO: Pod "pod-4891589b-9420-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:15:03.225: INFO: Trying to get logs from node hunter-worker2 pod pod-4891589b-9420-11ea-bb6f-0242ac11001c container test-container: STEP: delete the pod May 12 07:15:03.551: INFO: Waiting for pod pod-4891589b-9420-11ea-bb6f-0242ac11001c to disappear May 12 07:15:03.572: INFO: Pod pod-4891589b-9420-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:15:03.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-rdvds" for this suite. 
May 12 07:15:09.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:15:09.624: INFO: namespace: e2e-tests-emptydir-rdvds, resource: bindings, ignored listing per whitelist May 12 07:15:09.660: INFO: namespace e2e-tests-emptydir-rdvds deletion completed in 6.08415986s • [SLOW TEST:13.505 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:15:09.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-5083c4ee-9420-11ea-bb6f-0242ac11001c STEP: Creating configMap with name cm-test-opt-upd-5083c54b-9420-11ea-bb6f-0242ac11001c STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-5083c4ee-9420-11ea-bb6f-0242ac11001c STEP: Updating configmap cm-test-opt-upd-5083c54b-9420-11ea-bb6f-0242ac11001c STEP: Creating configMap with name cm-test-opt-create-5083c572-9420-11ea-bb6f-0242ac11001c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:15:22.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-wmd4d" for this suite. 
May 12 07:15:48.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:15:48.517: INFO: namespace: e2e-tests-configmap-wmd4d, resource: bindings, ignored listing per whitelist May 12 07:15:48.710: INFO: namespace e2e-tests-configmap-wmd4d deletion completed in 26.231453466s • [SLOW TEST:39.050 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:15:48.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 12 07:15:52.352: INFO: Pod name wrapped-volume-race-69b7be83-9420-11ea-bb6f-0242ac11001c: Found 0 pods out of 5 May 12 07:15:57.358: INFO: Pod name wrapped-volume-race-69b7be83-9420-11ea-bb6f-0242ac11001c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-69b7be83-9420-11ea-bb6f-0242ac11001c in namespace e2e-tests-emptydir-wrapper-8825p, will wait for the garbage collector to delete the pods May 12 07:18:01.460: INFO: Deleting ReplicationController wrapped-volume-race-69b7be83-9420-11ea-bb6f-0242ac11001c took: 7.440979ms May 12 07:18:02.561: INFO: Terminating ReplicationController wrapped-volume-race-69b7be83-9420-11ea-bb6f-0242ac11001c pods took: 1.100351383s STEP: Creating RC which spawns configmap-volume pods May 12 07:18:42.558: INFO: Pod name wrapped-volume-race-cf37a37e-9420-11ea-bb6f-0242ac11001c: Found 0 pods out of 5 May 12 07:18:47.567: INFO: Pod name wrapped-volume-race-cf37a37e-9420-11ea-bb6f-0242ac11001c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-cf37a37e-9420-11ea-bb6f-0242ac11001c in namespace e2e-tests-emptydir-wrapper-8825p, will wait for the garbage collector to delete the pods May 12 07:21:23.652: INFO: Deleting ReplicationController wrapped-volume-race-cf37a37e-9420-11ea-bb6f-0242ac11001c took: 9.39925ms May 12 07:21:23.753: INFO: Terminating ReplicationController wrapped-volume-race-cf37a37e-9420-11ea-bb6f-0242ac11001c pods took: 100.185851ms STEP: Creating RC which spawns configmap-volume pods May 12 07:22:01.779: INFO: Pod name wrapped-volume-race-46012c0e-9421-11ea-bb6f-0242ac11001c: Found 0 pods out of 5 May 12 07:22:06.796: INFO: Pod name wrapped-volume-race-46012c0e-9421-11ea-bb6f-0242ac11001c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController 
wrapped-volume-race-46012c0e-9421-11ea-bb6f-0242ac11001c in namespace e2e-tests-emptydir-wrapper-8825p, will wait for the garbage collector to delete the pods May 12 07:24:00.878: INFO: Deleting ReplicationController wrapped-volume-race-46012c0e-9421-11ea-bb6f-0242ac11001c took: 8.000019ms May 12 07:24:00.978: INFO: Terminating ReplicationController wrapped-volume-race-46012c0e-9421-11ea-bb6f-0242ac11001c pods took: 100.242537ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:24:45.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-8825p" for this suite. May 12 07:24:57.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:24:58.061: INFO: namespace: e2e-tests-emptydir-wrapper-8825p, resource: bindings, ignored listing per whitelist May 12 07:24:58.070: INFO: namespace e2e-tests-emptydir-wrapper-8825p deletion completed in 12.097992329s • [SLOW TEST:549.359 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:24:58.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 07:24:58.200: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af2afaf6-9421-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-downward-api-rskvl" to be "success or failure" May 12 07:24:58.210: INFO: Pod "downwardapi-volume-af2afaf6-9421-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.732339ms May 12 07:25:00.311: INFO: Pod "downwardapi-volume-af2afaf6-9421-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110619612s May 12 07:25:02.317: INFO: Pod "downwardapi-volume-af2afaf6-9421-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.116354055s May 12 07:25:04.321: INFO: Pod "downwardapi-volume-af2afaf6-9421-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.121041617s STEP: Saw pod success May 12 07:25:04.322: INFO: Pod "downwardapi-volume-af2afaf6-9421-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:25:04.324: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-af2afaf6-9421-11ea-bb6f-0242ac11001c container client-container: STEP: delete the pod May 12 07:25:04.359: INFO: Waiting for pod downwardapi-volume-af2afaf6-9421-11ea-bb6f-0242ac11001c to disappear May 12 07:25:04.362: INFO: Pod downwardapi-volume-af2afaf6-9421-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:25:04.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-rskvl" for this suite. May 12 07:25:12.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:25:12.464: INFO: namespace: e2e-tests-downward-api-rskvl, resource: bindings, ignored listing per whitelist May 12 07:25:12.528: INFO: namespace e2e-tests-downward-api-rskvl deletion completed in 8.162642877s • [SLOW TEST:14.457 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:25:12.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0512 07:25:14.420676 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 12 07:25:14.420: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:25:14.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-49ff6" for this suite. May 12 07:25:20.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:25:20.468: INFO: namespace: e2e-tests-gc-49ff6, resource: bindings, ignored listing per whitelist May 12 07:25:20.534: INFO: namespace e2e-tests-gc-49ff6 deletion completed in 6.111018144s • [SLOW TEST:8.006 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:25:20.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments May 12 07:25:20.775: INFO: Waiting up to 5m0s for pod "client-containers-bc910512-9421-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-containers-4jptk" to be "success or failure" May 12 07:25:20.785: INFO: Pod "client-containers-bc910512-9421-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.332912ms May 12 07:25:22.916: INFO: Pod "client-containers-bc910512-9421-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.141120468s May 12 07:25:24.920: INFO: Pod "client-containers-bc910512-9421-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144362431s May 12 07:25:26.935: INFO: Pod "client-containers-bc910512-9421-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.160143988s STEP: Saw pod success May 12 07:25:26.935: INFO: Pod "client-containers-bc910512-9421-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:25:26.938: INFO: Trying to get logs from node hunter-worker2 pod client-containers-bc910512-9421-11ea-bb6f-0242ac11001c container test-container: STEP: delete the pod May 12 07:25:27.001: INFO: Waiting for pod client-containers-bc910512-9421-11ea-bb6f-0242ac11001c to disappear May 12 07:25:27.008: INFO: Pod client-containers-bc910512-9421-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:25:27.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-4jptk" for this suite. May 12 07:25:35.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:25:35.051: INFO: namespace: e2e-tests-containers-4jptk, resource: bindings, ignored listing per whitelist May 12 07:25:35.101: INFO: namespace e2e-tests-containers-4jptk deletion completed in 8.090855147s • [SLOW TEST:14.567 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:25:35.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 12 07:25:41.810: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c53bce5e-9421-11ea-bb6f-0242ac11001c" May 12 07:25:41.810: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c53bce5e-9421-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-pods-7mq6g" to be "terminated due to deadline exceeded" May 12 07:25:41.830: INFO: Pod "pod-update-activedeadlineseconds-c53bce5e-9421-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. 
Elapsed: 19.741879ms May 12 07:25:43.833: INFO: Pod "pod-update-activedeadlineseconds-c53bce5e-9421-11ea-bb6f-0242ac11001c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.023284297s May 12 07:25:43.833: INFO: Pod "pod-update-activedeadlineseconds-c53bce5e-9421-11ea-bb6f-0242ac11001c" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:25:43.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-7mq6g" for this suite. May 12 07:25:50.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:25:50.074: INFO: namespace: e2e-tests-pods-7mq6g, resource: bindings, ignored listing per whitelist May 12 07:25:50.108: INFO: namespace e2e-tests-pods-7mq6g deletion completed in 6.272135621s • [SLOW TEST:15.006 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:25:50.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 12 07:25:50.238: INFO: Waiting up to 5m0s for pod "pod-ce2bf2f7-9421-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-emptydir-74tgw" to be "success or failure" May 12 07:25:50.241: INFO: Pod "pod-ce2bf2f7-9421-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.312636ms May 12 07:25:52.245: INFO: Pod "pod-ce2bf2f7-9421-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00733039s May 12 07:25:54.250: INFO: Pod "pod-ce2bf2f7-9421-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011707219s STEP: Saw pod success May 12 07:25:54.250: INFO: Pod "pod-ce2bf2f7-9421-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:25:54.253: INFO: Trying to get logs from node hunter-worker pod pod-ce2bf2f7-9421-11ea-bb6f-0242ac11001c container test-container: STEP: delete the pod May 12 07:25:54.292: INFO: Waiting for pod pod-ce2bf2f7-9421-11ea-bb6f-0242ac11001c to disappear May 12 07:25:54.313: INFO: Pod pod-ce2bf2f7-9421-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:25:54.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-74tgw" for this suite. 
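The (root,0777,tmpfs) case just above exercises an emptyDir volume backed by memory and checks the permission bits of what the container writes there. A minimal sketch of the kind of pod such a test creates (names and the busybox command are illustrative, not the exact manifest the suite generates):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # write into the mount and report the file mode, roughly what the e2e check inspects
    command: ["sh", "-c", "echo hi > /mnt/test/file && stat -c '%a' /mnt/test/file"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # "tmpfs" in the test name means a RAM-backed emptyDir

With medium: Memory the volume lives on tmpfs, so its contents count against node memory and vanish when the pod is removed.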
May 12 07:26:00.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:26:00.472: INFO: namespace: e2e-tests-emptydir-74tgw, resource: bindings, ignored listing per whitelist May 12 07:26:00.494: INFO: namespace e2e-tests-emptydir-74tgw deletion completed in 6.177225282s • [SLOW TEST:10.386 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:26:00.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 12 07:26:00.610: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:26:07.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-bqkds" for this suite. 
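The init-container case above verifies that a failing init container on a restartPolicy: Never pod keeps the app container from ever starting and drives the pod to a terminal Failed phase. A minimal sketch of that shape (image and commands are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo             # illustrative
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox
    command: ["sh", "-c", "exit 1"]          # init container exits non-zero
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo should never run"]

Because the pod never restarts, the failed init container is terminal: the app container stays waiting and the pod phase ends up Failed, which is what the framework waits for before destroying the namespace.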
May 12 07:26:13.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:26:13.819: INFO: namespace: e2e-tests-init-container-bqkds, resource: bindings, ignored listing per whitelist May 12 07:26:13.864: INFO: namespace e2e-tests-init-container-bqkds deletion completed in 6.089775534s • [SLOW TEST:13.370 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:26:13.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-tsbw6 May 12 07:26:18.025: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-tsbw6 STEP: checking the pod's current state and verifying that restartCount is present May 12 07:26:18.029: INFO: Initial restart count of pod liveness-exec is 0 May 12 07:27:06.635: INFO: Restart count of pod e2e-tests-container-probe-tsbw6/liveness-exec is now 1 (48.606399123s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:27:06.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-tsbw6" for this suite. 
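The probe test above creates a pod whose exec liveness probe runs `cat /tmp/health`, then waits for restartCount to increase once the file disappears. A minimal sketch of that arrangement (image and timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo         # illustrative
spec:
  containers:
  - name: liveness
    image: busybox
    # create the health file, then remove it so the probe starts failing
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5

Once `cat /tmp/health` starts failing, the kubelet kills and restarts the container; that restart is the "Restart count ... is now 1" entry logged at 07:27:06 above.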
May 12 07:27:12.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:27:12.787: INFO: namespace: e2e-tests-container-probe-tsbw6, resource: bindings, ignored listing per whitelist May 12 07:27:12.834: INFO: namespace e2e-tests-container-probe-tsbw6 deletion completed in 6.152751726s • [SLOW TEST:58.969 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:27:12.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions May 12 07:27:12.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 12 07:27:13.116: INFO: stderr: "" May 12 07:27:13.116: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:27:13.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-phm58" for this suite. 
May 12 07:27:19.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:27:19.208: INFO: namespace: e2e-tests-kubectl-phm58, resource: bindings, ignored listing per whitelist May 12 07:27:19.235: INFO: namespace e2e-tests-kubectl-phm58 deletion completed in 6.113336992s • [SLOW TEST:6.401 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:27:19.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command May 12 07:27:19.391: INFO: Waiting up to 5m0s for pod "var-expansion-0352acdc-9422-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-var-expansion-pmlms" to be "success or failure" May 12 07:27:19.409: INFO: Pod "var-expansion-0352acdc-9422-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.970017ms May 12 07:27:21.413: INFO: Pod "var-expansion-0352acdc-9422-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021747965s May 12 07:27:23.472: INFO: Pod "var-expansion-0352acdc-9422-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080704123s STEP: Saw pod success May 12 07:27:23.472: INFO: Pod "var-expansion-0352acdc-9422-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:27:23.475: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-0352acdc-9422-11ea-bb6f-0242ac11001c container dapi-container: STEP: delete the pod May 12 07:27:23.551: INFO: Waiting for pod var-expansion-0352acdc-9422-11ea-bb6f-0242ac11001c to disappear May 12 07:27:23.616: INFO: Pod var-expansion-0352acdc-9422-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:27:23.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-pmlms" for this suite. 
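The variable-expansion test above builds a pod whose command references an environment variable with the $(VAR) syntax and checks the expanded value in the container output. A minimal sketch (variable name and value are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo         # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MY_MESSAGE
      value: "hello from var expansion"
    # $(MY_MESSAGE) is expanded by the kubelet before the command runs,
    # unlike $MY_MESSAGE, which would be left for the shell to resolve
    command: ["sh", "-c", "echo $(MY_MESSAGE)"]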
May 12 07:27:29.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:27:29.669: INFO: namespace: e2e-tests-var-expansion-pmlms, resource: bindings, ignored listing per whitelist May 12 07:27:29.728: INFO: namespace e2e-tests-var-expansion-pmlms deletion completed in 6.107959763s • [SLOW TEST:10.494 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:27:29.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-098bfab4-9422-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume secrets May 12 07:27:29.843: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-098de9e1-9422-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-4zpkw" to be "success or failure" May 12 07:27:29.883: INFO: Pod "pod-projected-secrets-098de9e1-9422-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 39.93515ms May 12 07:27:32.160: INFO: Pod "pod-projected-secrets-098de9e1-9422-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31754781s May 12 07:27:34.164: INFO: Pod "pod-projected-secrets-098de9e1-9422-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321157501s May 12 07:27:36.167: INFO: Pod "pod-projected-secrets-098de9e1-9422-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.32450533s STEP: Saw pod success May 12 07:27:36.167: INFO: Pod "pod-projected-secrets-098de9e1-9422-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:27:36.169: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-098de9e1-9422-11ea-bb6f-0242ac11001c container projected-secret-volume-test: STEP: delete the pod May 12 07:27:36.203: INFO: Waiting for pod pod-projected-secrets-098de9e1-9422-11ea-bb6f-0242ac11001c to disappear May 12 07:27:36.214: INFO: Pod pod-projected-secrets-098de9e1-9422-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:27:36.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4zpkw" for this suite. 
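The projected-secret case above mounts a secret through a projected volume, remapping the key to a new file name ("mappings") and setting an explicit per-file mode ("Item Mode"). A minimal sketch of those two knobs (secret name, key, path, and mode are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo      # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/new-path-data-1; stat -c '%a' /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: projected-secret-test-map   # the Secret must already exist
          items:
          - key: data-1
            path: new-path-data-1           # mapping: key exposed under a different file name
            mode: 0400                      # item mode: per-file permission bits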
May 12 07:27:44.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:27:44.293: INFO: namespace: e2e-tests-projected-4zpkw, resource: bindings, ignored listing per whitelist May 12 07:27:44.315: INFO: namespace e2e-tests-projected-4zpkw deletion completed in 8.094645982s • [SLOW TEST:14.586 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:27:44.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-4slc6 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet May 12 07:27:44.771: INFO: Found 0 stateful pods, waiting for 3 May 12 07:27:54.775: INFO: Found 2 stateful pods, waiting for 3 May 12 07:28:04.776: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 07:28:04.776: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 07:28:04.776: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 12 07:28:04.804: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 12 07:28:14.895: INFO: Updating stateful set ss2 May 12 07:28:14.927: INFO: Waiting for Pod e2e-tests-statefulset-4slc6/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 12 07:28:25.446: INFO: Found 2 stateful pods, waiting for 3 May 12 07:28:35.451: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 07:28:35.451: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 07:28:35.451: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 12 07:28:35.475: INFO: Updating stateful set ss2 May 12 
07:28:35.491: INFO: Waiting for Pod e2e-tests-statefulset-4slc6/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 07:28:45.518: INFO: Updating stateful set ss2 May 12 07:28:45.529: INFO: Waiting for StatefulSet e2e-tests-statefulset-4slc6/ss2 to complete update May 12 07:28:45.529: INFO: Waiting for Pod e2e-tests-statefulset-4slc6/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 12 07:28:55.537: INFO: Deleting all statefulset in ns e2e-tests-statefulset-4slc6 May 12 07:28:55.539: INFO: Scaling statefulset ss2 to 0 May 12 07:29:15.555: INFO: Waiting for statefulset status.replicas updated to 0 May 12 07:29:15.558: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:29:15.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-4slc6" for this suite. May 12 07:29:21.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:29:21.635: INFO: namespace: e2e-tests-statefulset-4slc6, resource: bindings, ignored listing per whitelist May 12 07:29:21.686: INFO: namespace e2e-tests-statefulset-4slc6 deletion completed in 6.110312032s • [SLOW TEST:97.371 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:29:21.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
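The canary and phased rolling updates exercised by the StatefulSet test above hinge on the RollingUpdate partition: pods with an ordinal greater than or equal to the partition get the new revision, the rest keep the old one. A minimal sketch of the updateStrategy involved (labels, replica count, and partition value are illustrative; the names and image match the log):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test
  selector:
    matchLabels:
      app: ss2                     # illustrative label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                 # only ordinal 2 (ss2-2) is updated: the canary
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine   # target image from the log

Lowering the partition step by step (2 -> 1 -> 0) is the "phased" part: each decrement rolls one more ordinal forward to the new revision, which is what the "Waiting for Pod ... to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c" entries above track.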
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 12 07:29:29.861: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 07:29:29.864: INFO: Pod pod-with-prestop-exec-hook still exists May 12 07:29:31.864: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 07:29:31.869: INFO: Pod pod-with-prestop-exec-hook still exists May 12 07:29:33.864: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 07:29:33.871: INFO: Pod pod-with-prestop-exec-hook still exists May 12 07:29:35.864: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 07:29:35.878: INFO: Pod pod-with-prestop-exec-hook still exists May 12 07:29:37.864: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 07:29:37.868: INFO: Pod pod-with-prestop-exec-hook still exists May 12 07:29:39.864: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 07:29:39.869: INFO: Pod pod-with-prestop-exec-hook still exists May 12 07:29:41.864: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 07:29:41.868: INFO: Pod pod-with-prestop-exec-hook still exists May 12 07:29:43.864: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 07:29:43.867: INFO: Pod pod-with-prestop-exec-hook still exists May 12 07:29:45.864: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 07:29:45.870: INFO: Pod pod-with-prestop-exec-hook still exists May 12 07:29:47.864: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 07:29:47.869: INFO: Pod pod-with-prestop-exec-hook still exists May 12 07:29:49.864: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 07:29:49.868: INFO: Pod pod-with-prestop-exec-hook still exists May 12 07:29:51.864: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 07:29:51.868: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:29:51.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-4vjpb" for this suite. 
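The lifecycle-hook test deletes a pod that carries a preStop exec hook and then verifies, via the helper pod created in the BeforeEach, that the hook actually ran before the container terminated. A minimal sketch of such a hook (the handler command is illustrative; the real test calls out to the helper pod instead of writing a file):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook # name as in the log; spec details are illustrative
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          # runs inside the container when the pod is deleted, before SIGTERM is sent
          command: ["sh", "-c", "echo prestop hook ran > /tmp/prestop"]

The long run of "Pod pod-with-prestop-exec-hook still exists" entries above is simply the framework waiting out graceful deletion while the hook and termination complete.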
May 12 07:30:14.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:30:14.097: INFO: namespace: e2e-tests-container-lifecycle-hook-4vjpb, resource: bindings, ignored listing per whitelist May 12 07:30:14.108: INFO: namespace e2e-tests-container-lifecycle-hook-4vjpb deletion completed in 22.229531029s • [SLOW TEST:52.422 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:30:14.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-6b95a2ae-9422-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 07:30:14.326: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6b9617fa-9422-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-bsvqd" to be "success or failure" May 12 07:30:14.331: INFO: Pod "pod-projected-configmaps-6b9617fa-9422-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.067679ms May 12 07:30:16.405: INFO: Pod "pod-projected-configmaps-6b9617fa-9422-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079098548s May 12 07:30:18.409: INFO: Pod "pod-projected-configmaps-6b9617fa-9422-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082561816s STEP: Saw pod success May 12 07:30:18.409: INFO: Pod "pod-projected-configmaps-6b9617fa-9422-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:30:18.411: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-6b9617fa-9422-11ea-bb6f-0242ac11001c container projected-configmap-volume-test: STEP: delete the pod May 12 07:30:18.464: INFO: Waiting for pod pod-projected-configmaps-6b9617fa-9422-11ea-bb6f-0242ac11001c to disappear May 12 07:30:18.475: INFO: Pod pod-projected-configmaps-6b9617fa-9422-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:30:18.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bsvqd" for this suite. 
May 12 07:30:26.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:30:26.728: INFO: namespace: e2e-tests-projected-bsvqd, resource: bindings, ignored listing per whitelist May 12 07:30:26.728: INFO: namespace e2e-tests-projected-bsvqd deletion completed in 8.249900993s • [SLOW TEST:12.619 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:30:26.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-732655d0-9422-11ea-bb6f-0242ac11001c STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:30:33.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-hlvcp" for this suite. 
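The binary-data test above creates a ConfigMap that carries both a text key under data and a binary key under binaryData, mounts it, and checks both resulting files. A minimal sketch (names and payload are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo      # illustrative
data:
  text-data: "some plain UTF-8 content"
binaryData:
  binary-data: 3q2+7w==            # base64-encoded raw bytes (0xde 0xad 0xbe 0xef), need not be valid UTF-8

When mounted as a volume, each key becomes a file, so the pod ends up with one readable text file and one file containing the decoded raw bytes.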
May 12 07:30:55.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:30:55.186: INFO: namespace: e2e-tests-configmap-hlvcp, resource: bindings, ignored listing per whitelist May 12 07:30:55.218: INFO: namespace e2e-tests-configmap-hlvcp deletion completed in 22.124561183s • [SLOW TEST:28.489 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:30:55.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 07:30:55.329: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84098089-9422-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-cxl99" to be "success or failure" May 12 07:30:55.352: INFO: Pod "downwardapi-volume-84098089-9422-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.851697ms May 12 07:30:57.589: INFO: Pod "downwardapi-volume-84098089-9422-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.259927079s May 12 07:30:59.596: INFO: Pod "downwardapi-volume-84098089-9422-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.267022061s STEP: Saw pod success May 12 07:30:59.597: INFO: Pod "downwardapi-volume-84098089-9422-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:30:59.598: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-84098089-9422-11ea-bb6f-0242ac11001c container client-container: STEP: delete the pod May 12 07:30:59.631: INFO: Waiting for pod downwardapi-volume-84098089-9422-11ea-bb6f-0242ac11001c to disappear May 12 07:30:59.643: INFO: Pod downwardapi-volume-84098089-9422-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:30:59.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cxl99" for this suite. 
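Both downward-API variants in this run (the plain downwardAPI volume earlier and the projected form here) expose the container's CPU limit as a file via resourceFieldRef. A minimal sketch of the projected form (names, limit, and divisor are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-demo # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m          # report the limit in millicores, i.e. the file reads "500"

The related "default cpu limit" case earlier in the run is the same mechanism with no limit set, in which case the file reports the node's allocatable CPU instead.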
May 12 07:31:07.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:31:08.309: INFO: namespace: e2e-tests-projected-cxl99, resource: bindings, ignored listing per whitelist May 12 07:31:08.347: INFO: namespace e2e-tests-projected-cxl99 deletion completed in 8.701503478s • [SLOW TEST:13.129 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:31:08.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 12 07:31:08.495: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-c6k4s,SelfLink:/api/v1/namespaces/e2e-tests-watch-c6k4s/configmaps/e2e-watch-test-watch-closed,UID:8be01a01-9422-11ea-99e8-0242ac110002,ResourceVersion:10113647,Generation:0,CreationTimestamp:2020-05-12 07:31:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 07:31:08.495: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-c6k4s,SelfLink:/api/v1/namespaces/e2e-tests-watch-c6k4s/configmaps/e2e-watch-test-watch-closed,UID:8be01a01-9422-11ea-99e8-0242ac110002,ResourceVersion:10113648,Generation:0,CreationTimestamp:2020-05-12 07:31:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 12 07:31:08.556: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-c6k4s,SelfLink:/api/v1/namespaces/e2e-tests-watch-c6k4s/configmaps/e2e-watch-test-watch-closed,UID:8be01a01-9422-11ea-99e8-0242ac110002,ResourceVersion:10113649,Generation:0,CreationTimestamp:2020-05-12 07:31:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 07:31:08.557: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-c6k4s,SelfLink:/api/v1/namespaces/e2e-tests-watch-c6k4s/configmaps/e2e-watch-test-watch-closed,UID:8be01a01-9422-11ea-99e8-0242ac110002,ResourceVersion:10113650,Generation:0,CreationTimestamp:2020-05-12 07:31:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:31:08.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-c6k4s" for this suite. May 12 07:31:14.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:31:14.589: INFO: namespace: e2e-tests-watch-c6k4s, resource: bindings, ignored listing per whitelist May 12 07:31:14.648: INFO: namespace e2e-tests-watch-c6k4s deletion completed in 6.08163164s • [SLOW TEST:6.301 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:31:14.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 12 07:31:14.922: INFO: Waiting up to 5m0s for pod "pod-8fb0b875-9422-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-emptydir-mvqnb" to be "success or failure" May 12 07:31:14.968: INFO: Pod "pod-8fb0b875-9422-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 45.541811ms May 12 07:31:16.974: INFO: Pod "pod-8fb0b875-9422-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051415247s May 12 07:31:18.978: INFO: Pod "pod-8fb0b875-9422-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.055381485s May 12 07:31:20.982: INFO: Pod "pod-8fb0b875-9422-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059534987s STEP: Saw pod success May 12 07:31:20.982: INFO: Pod "pod-8fb0b875-9422-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:31:20.985: INFO: Trying to get logs from node hunter-worker2 pod pod-8fb0b875-9422-11ea-bb6f-0242ac11001c container test-container: STEP: delete the pod May 12 07:31:21.064: INFO: Waiting for pod pod-8fb0b875-9422-11ea-bb6f-0242ac11001c to disappear May 12 07:31:21.100: INFO: Pod pod-8fb0b875-9422-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:31:21.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-mvqnb" for this suite. May 12 07:31:27.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:31:27.149: INFO: namespace: e2e-tests-emptydir-mvqnb, resource: bindings, ignored listing per whitelist May 12 07:31:27.187: INFO: namespace e2e-tests-emptydir-mvqnb deletion completed in 6.081625217s • [SLOW TEST:12.539 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:31:27.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc May 12 07:31:27.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tzr2w' May 12 07:31:31.712: INFO: stderr: "" May 12 07:31:31.712: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. 
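Stepping back to the (non-root,0666,default) emptyDir case that finished just before this kubectl run: it is the same emptyDir check as earlier, but executed as a non-root UID against the node's default disk-backed medium with 0666 permissions. A minimal sketch of the extra pieces (UID, mode, and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo      # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # the "non-root" part of the test name
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /mnt/test/file && chmod 0666 /mnt/test/file && stat -c '%a' /mnt/test/file"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir: {}                   # no medium set, so the node's default (disk) backing is used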
May 12 07:31:32.717: INFO: Selector matched 1 pods for map[app:redis] May 12 07:31:32.717: INFO: Found 0 / 1 May 12 07:31:33.856: INFO: Selector matched 1 pods for map[app:redis] May 12 07:31:33.856: INFO: Found 0 / 1 May 12 07:31:34.716: INFO: Selector matched 1 pods for map[app:redis] May 12 07:31:34.716: INFO: Found 0 / 1 May 12 07:31:35.716: INFO: Selector matched 1 pods for map[app:redis] May 12 07:31:35.716: INFO: Found 0 / 1 May 12 07:31:36.715: INFO: Selector matched 1 pods for map[app:redis] May 12 07:31:36.715: INFO: Found 0 / 1 May 12 07:31:38.017: INFO: Selector matched 1 pods for map[app:redis] May 12 07:31:38.017: INFO: Found 1 / 1 May 12 07:31:38.017: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 12 07:31:38.021: INFO: Selector matched 1 pods for map[app:redis] May 12 07:31:38.021: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings May 12 07:31:38.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2tbp7 redis-master --namespace=e2e-tests-kubectl-tzr2w' May 12 07:31:38.135: INFO: stderr: "" May 12 07:31:38.135: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 12 May 07:31:36.804 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 May 07:31:36.804 # Server started, Redis version 3.2.12\n1:M 12 May 07:31:36.804 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 12 May 07:31:36.804 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 12 07:31:38.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2tbp7 redis-master --namespace=e2e-tests-kubectl-tzr2w --tail=1' May 12 07:31:38.373: INFO: stderr: "" May 12 07:31:38.373: INFO: stdout: "1:M 12 May 07:31:36.804 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 12 07:31:38.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2tbp7 redis-master --namespace=e2e-tests-kubectl-tzr2w --limit-bytes=1' May 12 07:31:38.559: INFO: stderr: "" May 12 07:31:38.559: INFO: stdout: " " STEP: exposing timestamps May 12 07:31:38.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2tbp7 redis-master --namespace=e2e-tests-kubectl-tzr2w --tail=1 --timestamps' May 12 07:31:38.678: INFO: stderr: "" May 12 07:31:38.678: INFO: stdout: "2020-05-12T07:31:36.80478036Z 1:M 12 May 07:31:36.804 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 12 07:31:41.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2tbp7 redis-master --namespace=e2e-tests-kubectl-tzr2w --since=1s' May 12 07:31:41.288: INFO: stderr: "" May 12 07:31:41.288: INFO: stdout: "" May 12 07:31:41.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2tbp7 redis-master --namespace=e2e-tests-kubectl-tzr2w --since=24h' May 12 07:31:41.394: INFO: stderr: "" May 12 07:31:41.394: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 12 May 07:31:36.804 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 May 07:31:36.804 # Server started, Redis version 3.2.12\n1:M 12 May 07:31:36.804 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 12 May 07:31:36.804 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources May 12 07:31:41.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tzr2w' May 12 07:31:41.516: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 12 07:31:41.516: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 12 07:31:41.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-tzr2w' May 12 07:31:42.205: INFO: stderr: "No resources found.\n" May 12 07:31:42.205: INFO: stdout: "" May 12 07:31:42.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-tzr2w -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 07:31:42.499: INFO: stderr: "" May 12 07:31:42.499: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:31:42.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tzr2w" for this suite. May 12 07:32:06.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:32:06.822: INFO: namespace: e2e-tests-kubectl-tzr2w, resource: bindings, ignored listing per whitelist May 12 07:32:06.863: INFO: namespace e2e-tests-kubectl-tzr2w deletion completed in 24.132638491s • [SLOW TEST:39.676 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:32:06.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace May 12 07:32:15.089: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:32:40.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-plvsc" for this suite. 
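The "Kubectl logs ... retrieve and filter logs" case above drives kubectl's log-filtering flags against the redis-master pod (note that "kubectl log" is a deprecated alias of "kubectl logs"). A minimal way to reproduce the same filtering by hand, reusing the pod and namespace names from this run, is sketched below; any pod name would do.

# full container log
kubectl logs redis-master-2tbp7 redis-master -n e2e-tests-kubectl-tzr2w
# last line only
kubectl logs redis-master-2tbp7 redis-master -n e2e-tests-kubectl-tzr2w --tail=1
# at most one byte of output
kubectl logs redis-master-2tbp7 redis-master -n e2e-tests-kubectl-tzr2w --limit-bytes=1
# prefix each line with an RFC3339 timestamp
kubectl logs redis-master-2tbp7 redis-master -n e2e-tests-kubectl-tzr2w --tail=1 --timestamps
# restrict to a time window
kubectl logs redis-master-2tbp7 redis-master -n e2e-tests-kubectl-tzr2w --since=1s
kubectl logs redis-master-2tbp7 redis-master -n e2e-tests-kubectl-tzr2w --since=24h

As in the run above, --since=1s returns nothing once the startup banner is older than a second, while --since=24h returns the whole log.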
May 12 07:32:46.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:32:47.340: INFO: namespace: e2e-tests-namespaces-plvsc, resource: bindings, ignored listing per whitelist May 12 07:32:47.346: INFO: namespace e2e-tests-namespaces-plvsc deletion completed in 6.488240527s STEP: Destroying namespace "e2e-tests-nsdeletetest-nhcqq" for this suite. May 12 07:32:47.348: INFO: Namespace e2e-tests-nsdeletetest-nhcqq was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-nm54k" for this suite. May 12 07:32:53.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:32:53.453: INFO: namespace: e2e-tests-nsdeletetest-nm54k, resource: bindings, ignored listing per whitelist May 12 07:32:53.465: INFO: namespace e2e-tests-nsdeletetest-nm54k deletion completed in 6.117225216s • [SLOW TEST:46.602 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:32:53.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-trvlh STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-trvlh to expose endpoints map[] May 12 07:32:53.622: INFO: Get endpoints failed (40.658132ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 12 07:32:54.627: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-trvlh exposes endpoints map[] (1.04489057s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-trvlh STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-trvlh to expose endpoints map[pod1:[80]] May 12 07:32:59.284: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.649226567s elapsed, will retry) May 12 07:33:00.290: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-trvlh exposes endpoints map[pod1:[80]] (5.655628444s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-trvlh STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-trvlh to expose endpoints map[pod1:[80] pod2:[80]] May 12 07:33:05.159: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-trvlh exposes endpoints map[pod1:[80] pod2:[80]] (4.865683923s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-trvlh STEP: waiting 
up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-trvlh to expose endpoints map[pod2:[80]] May 12 07:33:06.429: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-trvlh exposes endpoints map[pod2:[80]] (1.264721331s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-trvlh STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-trvlh to expose endpoints map[] May 12 07:33:07.575: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-trvlh exposes endpoints map[] (1.140869709s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:33:08.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-trvlh" for this suite. May 12 07:33:14.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:33:14.798: INFO: namespace: e2e-tests-services-trvlh, resource: bindings, ignored listing per whitelist May 12 07:33:14.826: INFO: namespace e2e-tests-services-trvlh deletion completed in 6.388030537s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:21.360 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:33:14.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-ss2j5 May 12 07:33:20.942: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-ss2j5 STEP: checking the pod's current state and verifying that restartCount is present May 12 07:33:20.944: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:37:22.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-ss2j5" for this suite. 
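The container-probe case above (liveness probe running exec "cat /tmp/health") simply watches restartCount stay at 0 for roughly four minutes. A minimal sketch of a pod with that style of probe follows; the image, timings and command are illustrative assumptions, not the exact manifest the suite generates, but the probe mechanism is the same.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    # create the health file and keep the container alive, so the probe keeps succeeding
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
EOF
# restartCount should remain 0 while /tmp/health exists
kubectl get pod liveness-exec -o jsonpath='{.status.containerStatuses[0].restartCount}'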
May 12 07:37:28.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:37:28.268: INFO: namespace: e2e-tests-container-probe-ss2j5, resource: bindings, ignored listing per whitelist May 12 07:37:28.331: INFO: namespace e2e-tests-container-probe-ss2j5 deletion completed in 6.117244892s • [SLOW TEST:253.505 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:37:28.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 12 07:37:33.007: INFO: Successfully updated pod "labelsupdate6e56b110-9423-11ea-bb6f-0242ac11001c" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:37:35.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-c6p5s" for this suite. 
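The Downward API volume case above relabels a running pod and expects the projected "labels" file inside the container to follow. A minimal sketch of that mechanism, with an illustrative pod name and label (the suite generates its own names), looks like this.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    stage: before
spec:
  containers:
  - name: client-container
    image: busybox
    args: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
# change the label; the kubelet rewrites /etc/podinfo/labels shortly afterwards
kubectl label pod labels-demo stage=after --overwrite
kubectl exec labels-demo -- cat /etc/podinfo/labels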
May 12 07:37:59.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:37:59.544: INFO: namespace: e2e-tests-downward-api-c6p5s, resource: bindings, ignored listing per whitelist May 12 07:37:59.569: INFO: namespace e2e-tests-downward-api-c6p5s deletion completed in 24.525638893s • [SLOW TEST:31.237 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:37:59.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 07:37:59.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-z6656' May 12 07:38:00.863: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 07:38:00.863: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 May 12 07:38:03.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-z6656' May 12 07:38:03.810: INFO: stderr: "" May 12 07:38:03.810: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:38:03.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-z6656" for this suite. 
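The "Kubectl run default" case above captures the 1.13-era behaviour where a bare kubectl run falls back to the deployment/apps.v1 generator and prints the deprecation warning seen in the log (current kubectl releases create a plain Pod instead). The command as executed by the suite, the non-deprecated equivalent, and the cleanup step:

# as run by the suite; on this kubectl generation it creates a Deployment and warns about the generator
kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
# equivalent without the deprecated generator
kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
# cleanup, as in the AfterEach above
kubectl delete deployment e2e-test-nginx-deployment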
May 12 07:38:13.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:38:13.132: INFO: namespace: e2e-tests-kubectl-z6656, resource: bindings, ignored listing per whitelist May 12 07:38:13.180: INFO: namespace e2e-tests-kubectl-z6656 deletion completed in 9.068414146s • [SLOW TEST:13.611 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:38:13.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 07:38:13.809: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:38:20.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-td9rx" for this suite. 
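The Pods case above retrieves container logs through the API server rather than via kubectl; per its title, the conformance client opens the pod's log subresource over a websocket. A rough sketch of hitting the same subresource through kubectl proxy is shown below (plain HTTP here; the websocket upgrade is what the e2e client adds). Namespace and pod name are placeholders.

kubectl proxy --port=8001 &
# GET the pod log subresource directly from the API server
curl "http://127.0.0.1:8001/api/v1/namespaces/<namespace>/pods/<pod>/log"
# the streaming equivalent from the CLI
kubectl logs <pod> -n <namespace> -f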
May 12 07:39:16.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:39:16.195: INFO: namespace: e2e-tests-pods-td9rx, resource: bindings, ignored listing per whitelist May 12 07:39:16.205: INFO: namespace e2e-tests-pods-td9rx deletion completed in 56.135571838s • [SLOW TEST:63.024 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:39:16.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-frtfx in namespace e2e-tests-proxy-rxkjk I0512 07:39:16.853432 6 runners.go:184] Created replication controller with name: proxy-service-frtfx, namespace: e2e-tests-proxy-rxkjk, replica count: 1 I0512 07:39:17.903841 6 runners.go:184] proxy-service-frtfx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 07:39:18.904024 6 runners.go:184] proxy-service-frtfx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 07:39:19.904266 6 runners.go:184] proxy-service-frtfx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 07:39:20.904456 6 runners.go:184] proxy-service-frtfx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 07:39:21.904686 6 runners.go:184] proxy-service-frtfx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 07:39:22.904905 6 runners.go:184] proxy-service-frtfx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 07:39:23.905102 6 runners.go:184] proxy-service-frtfx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 07:39:24.905523 6 runners.go:184] proxy-service-frtfx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 07:39:25.905730 6 runners.go:184] proxy-service-frtfx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 07:39:26.905961 6 runners.go:184] proxy-service-frtfx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 07:39:27.906216 6 runners.go:184] proxy-service-frtfx Pods: 
1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 07:39:27.909: INFO: setup took 11.207822038s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 12 07:39:27.917: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-rxkjk/pods/proxy-service-frtfx-tsxd8:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 07:39:37.414: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb395395-9423-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-downward-api-w2rkz" to be "success or failure" May 12 07:39:37.487: INFO: Pod "downwardapi-volume-bb395395-9423-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 73.240905ms May 12 07:39:39.491: INFO: Pod "downwardapi-volume-bb395395-9423-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076984196s May 12 07:39:41.557: INFO: Pod "downwardapi-volume-bb395395-9423-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143239844s May 12 07:39:43.561: INFO: Pod "downwardapi-volume-bb395395-9423-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.147339649s STEP: Saw pod success May 12 07:39:43.561: INFO: Pod "downwardapi-volume-bb395395-9423-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:39:43.564: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-bb395395-9423-11ea-bb6f-0242ac11001c container client-container: STEP: delete the pod May 12 07:39:44.613: INFO: Waiting for pod downwardapi-volume-bb395395-9423-11ea-bb6f-0242ac11001c to disappear May 12 07:39:44.898: INFO: Pod downwardapi-volume-bb395395-9423-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:39:44.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-w2rkz" for this suite. 
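The Downward API volume case above ("should provide container's cpu request") projects the container's CPU request into a file via resourceFieldRef. A minimal sketch of that wiring; the request value and mount path are illustrative, while the container name client-container matches the one the suite reads logs from.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-request
spec:
  containers:
  - name: client-container
    image: busybox
    args: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request; sleep 3600"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
EOF
# with divisor 1m, the file contains 250 for a 250m request
kubectl logs downward-cpu-request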
May 12 07:39:55.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:39:55.991: INFO: namespace: e2e-tests-downward-api-w2rkz, resource: bindings, ignored listing per whitelist May 12 07:39:57.627: INFO: namespace e2e-tests-downward-api-w2rkz deletion completed in 12.723682733s • [SLOW TEST:20.767 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:39:57.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 12 07:39:58.325: INFO: namespace e2e-tests-kubectl-kj99w May 12 07:39:58.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kj99w' May 12 07:39:58.650: INFO: stderr: "" May 12 07:39:58.650: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 12 07:39:59.653: INFO: Selector matched 1 pods for map[app:redis] May 12 07:39:59.653: INFO: Found 0 / 1 May 12 07:40:00.655: INFO: Selector matched 1 pods for map[app:redis] May 12 07:40:00.655: INFO: Found 0 / 1 May 12 07:40:01.762: INFO: Selector matched 1 pods for map[app:redis] May 12 07:40:01.762: INFO: Found 0 / 1 May 12 07:40:03.055: INFO: Selector matched 1 pods for map[app:redis] May 12 07:40:03.055: INFO: Found 0 / 1 May 12 07:40:03.755: INFO: Selector matched 1 pods for map[app:redis] May 12 07:40:03.755: INFO: Found 0 / 1 May 12 07:40:04.654: INFO: Selector matched 1 pods for map[app:redis] May 12 07:40:04.654: INFO: Found 0 / 1 May 12 07:40:05.807: INFO: Selector matched 1 pods for map[app:redis] May 12 07:40:05.807: INFO: Found 0 / 1 May 12 07:40:06.653: INFO: Selector matched 1 pods for map[app:redis] May 12 07:40:06.653: INFO: Found 0 / 1 May 12 07:40:07.737: INFO: Selector matched 1 pods for map[app:redis] May 12 07:40:07.737: INFO: Found 0 / 1 May 12 07:40:08.654: INFO: Selector matched 1 pods for map[app:redis] May 12 07:40:08.654: INFO: Found 1 / 1 May 12 07:40:08.654: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 12 07:40:08.657: INFO: Selector matched 1 pods for map[app:redis] May 12 07:40:08.657: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 12 07:40:08.657: INFO: wait on redis-master startup in e2e-tests-kubectl-kj99w May 12 07:40:08.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tcvcv redis-master --namespace=e2e-tests-kubectl-kj99w' May 12 07:40:08.880: INFO: stderr: "" May 12 07:40:08.880: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 12 May 07:40:07.506 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 May 07:40:07.506 # Server started, Redis version 3.2.12\n1:M 12 May 07:40:07.506 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 12 May 07:40:07.506 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 12 07:40:08.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-kj99w' May 12 07:40:09.689: INFO: stderr: "" May 12 07:40:09.690: INFO: stdout: "service/rm2 exposed\n" May 12 07:40:09.916: INFO: Service rm2 in namespace e2e-tests-kubectl-kj99w found. STEP: exposing service May 12 07:40:12.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-kj99w' May 12 07:40:12.810: INFO: stderr: "" May 12 07:40:12.810: INFO: stdout: "service/rm3 exposed\n" May 12 07:40:12.916: INFO: Service rm3 in namespace e2e-tests-kubectl-kj99w found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:40:14.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kj99w" for this suite. 
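The Kubectl expose case above turns the redis-master replication controller into a service, then exposes that service again under a new name and port. The same two commands from the log, plus a check that both services resolved to the redis pod:

kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 -n e2e-tests-kubectl-kj99w
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 -n e2e-tests-kubectl-kj99w
# both services should list the redis pod's IP on port 6379
kubectl get endpoints rm2 rm3 -n e2e-tests-kubectl-kj99w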
May 12 07:40:45.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:40:45.359: INFO: namespace: e2e-tests-kubectl-kj99w, resource: bindings, ignored listing per whitelist May 12 07:40:45.382: INFO: namespace e2e-tests-kubectl-kj99w deletion completed in 30.458129696s • [SLOW TEST:47.755 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:40:45.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 12 07:40:52.443: INFO: Successfully updated pod "labelsupdatee3f34b1a-9423-11ea-bb6f-0242ac11001c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:40:54.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-t6sgd" for this suite. 
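The Projected downwardAPI case above is the projected-volume variant of the earlier labels-on-modification test: the same metadata.labels fieldRef, but wrapped in a projected volume source. Relative to the downwardAPI-volume sketch above, only the volume stanza changes; a fragment of the pod spec (names illustrative) would read:

volumes:
- name: podinfo
  projected:
    sources:
    - downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels

The label update and the check of the mounted file are identical to the earlier sketch.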
May 12 07:41:20.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:41:21.015: INFO: namespace: e2e-tests-projected-t6sgd, resource: bindings, ignored listing per whitelist May 12 07:41:21.066: INFO: namespace e2e-tests-projected-t6sgd deletion completed in 26.113160571s • [SLOW TEST:35.684 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:41:21.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 07:41:21.195: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 12 07:41:21.215: INFO: Pod name sample-pod: Found 0 pods out of 1 May 12 07:41:26.276: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 12 07:41:26.277: INFO: Creating deployment "test-rolling-update-deployment" May 12 07:41:26.280: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 12 07:41:26.303: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 12 07:41:28.311: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 12 07:41:28.314: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866086, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866086, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866086, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866086, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 07:41:30.318: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866086, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866086, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866086, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866086, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 07:41:32.318: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 12 07:41:32.326: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-7fsbs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7fsbs/deployments/test-rolling-update-deployment,UID:fc1d9161-9423-11ea-99e8-0242ac110002,ResourceVersion:10115276,Generation:1,CreationTimestamp:2020-05-12 07:41:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-12 07:41:26 +0000 UTC 2020-05-12 07:41:26 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-12 07:41:30 +0000 UTC 2020-05-12 07:41:26 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 12 07:41:32.329: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-7fsbs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7fsbs/replicasets/test-rolling-update-deployment-75db98fb4c,UID:fc2223e1-9423-11ea-99e8-0242ac110002,ResourceVersion:10115267,Generation:1,CreationTimestamp:2020-05-12 07:41:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment fc1d9161-9423-11ea-99e8-0242ac110002 0xc001545dd7 0xc001545dd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 12 07:41:32.329: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 12 07:41:32.330: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-7fsbs,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7fsbs/replicasets/test-rolling-update-controller,UID:f9161d0e-9423-11ea-99e8-0242ac110002,ResourceVersion:10115275,Generation:2,CreationTimestamp:2020-05-12 07:41:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment fc1d9161-9423-11ea-99e8-0242ac110002 0xc001545d17 0xc001545d18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 07:41:32.333: INFO: Pod "test-rolling-update-deployment-75db98fb4c-sxx4v" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-sxx4v,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-7fsbs,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7fsbs/pods/test-rolling-update-deployment-75db98fb4c-sxx4v,UID:fc22b993-9423-11ea-99e8-0242ac110002,ResourceVersion:10115266,Generation:0,CreationTimestamp:2020-05-12 07:41:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c fc2223e1-9423-11ea-99e8-0242ac110002 0xc00170b207 0xc00170b208}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lbsn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lbsn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-lbsn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00170b2f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00170b310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 07:41:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 07:41:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 07:41:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 07:41:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.96,StartTime:2020-05-12 07:41:26 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-12 07:41:29 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://19574d527e0238fd05cbb9abeb136da458b6c0421aac64ef9df7b9fd36cb5e6d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:41:32.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-7fsbs" for this suite. 
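The Deployment case above adopts the existing test-rolling-update-controller replica set and rolls its pod over with the default RollingUpdate strategy; the "25%!,(MISSING)" artifacts in the dump are the 25% maxUnavailable / 25% maxSurge defaults mangled by a format string. A minimal sketch of a deployment that states those parameters explicitly, reusing the names and image from the dump (the suite itself creates the object through the API, not from a manifest):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# watch the new ReplicaSet scale up while the adopted one scales to 0
kubectl rollout status deployment/test-rolling-update-deployment
kubectl get rs -l name=sample-pod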
May 12 07:41:40.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:41:40.436: INFO: namespace: e2e-tests-deployment-7fsbs, resource: bindings, ignored listing per whitelist May 12 07:41:40.446: INFO: namespace e2e-tests-deployment-7fsbs deletion completed in 8.109999718s • [SLOW TEST:19.380 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:41:40.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-04aad770-9424-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume secrets May 12 07:41:40.644: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-04ac7f15-9424-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-kzcbg" to be "success or failure" May 12 07:41:40.662: INFO: Pod "pod-projected-secrets-04ac7f15-9424-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.055296ms May 12 07:41:42.767: INFO: Pod "pod-projected-secrets-04ac7f15-9424-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123122256s May 12 07:41:44.779: INFO: Pod "pod-projected-secrets-04ac7f15-9424-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.135092184s May 12 07:41:46.783: INFO: Pod "pod-projected-secrets-04ac7f15-9424-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.138727652s STEP: Saw pod success May 12 07:41:46.783: INFO: Pod "pod-projected-secrets-04ac7f15-9424-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:41:46.785: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-04ac7f15-9424-11ea-bb6f-0242ac11001c container secret-volume-test: STEP: delete the pod May 12 07:41:47.198: INFO: Waiting for pod pod-projected-secrets-04ac7f15-9424-11ea-bb6f-0242ac11001c to disappear May 12 07:41:47.276: INFO: Pod pod-projected-secrets-04ac7f15-9424-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:41:47.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kzcbg" for this suite. 
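The Projected secret case above mounts the same secret through two separate projected volumes in one pod and reads it from both mount points. A minimal sketch with illustrative names and key:

kubectl create secret generic projected-secret-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    args: ["/bin/sh", "-c", "cat /etc/projected-secret-volume-1/data-1 /etc/projected-secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/projected-secret-volume-2
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test
EOF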
May 12 07:41:53.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:41:53.400: INFO: namespace: e2e-tests-projected-kzcbg, resource: bindings, ignored listing per whitelist May 12 07:41:53.415: INFO: namespace e2e-tests-projected-kzcbg deletion completed in 6.135438548s • [SLOW TEST:12.969 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:41:53.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller May 12 07:41:53.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-92sh7' May 12 07:41:57.215: INFO: stderr: "" May 12 07:41:57.215: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 07:41:57.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-92sh7' May 12 07:41:57.334: INFO: stderr: "" May 12 07:41:57.334: INFO: stdout: "update-demo-nautilus-mh256 update-demo-nautilus-nvzlg " May 12 07:41:57.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mh256 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92sh7' May 12 07:41:57.427: INFO: stderr: "" May 12 07:41:57.427: INFO: stdout: "" May 12 07:41:57.427: INFO: update-demo-nautilus-mh256 is created but not running May 12 07:42:02.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-92sh7' May 12 07:42:02.546: INFO: stderr: "" May 12 07:42:02.546: INFO: stdout: "update-demo-nautilus-mh256 update-demo-nautilus-nvzlg " May 12 07:42:02.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mh256 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92sh7' May 12 07:42:02.646: INFO: stderr: "" May 12 07:42:02.646: INFO: stdout: "true" May 12 07:42:02.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mh256 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92sh7' May 12 07:42:02.758: INFO: stderr: "" May 12 07:42:02.758: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 07:42:02.758: INFO: validating pod update-demo-nautilus-mh256 May 12 07:42:02.763: INFO: got data: { "image": "nautilus.jpg" } May 12 07:42:02.763: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 07:42:02.763: INFO: update-demo-nautilus-mh256 is verified up and running May 12 07:42:02.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nvzlg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92sh7' May 12 07:42:02.870: INFO: stderr: "" May 12 07:42:02.870: INFO: stdout: "true" May 12 07:42:02.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nvzlg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92sh7' May 12 07:42:02.967: INFO: stderr: "" May 12 07:42:02.967: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 07:42:02.967: INFO: validating pod update-demo-nautilus-nvzlg May 12 07:42:02.971: INFO: got data: { "image": "nautilus.jpg" } May 12 07:42:02.971: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 07:42:02.971: INFO: update-demo-nautilus-nvzlg is verified up and running STEP: rolling-update to new replication controller May 12 07:42:02.973: INFO: scanned /root for discovery docs: May 12 07:42:02.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-92sh7' May 12 07:42:25.690: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 12 07:42:25.690: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 12 07:42:25.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-92sh7' May 12 07:42:25.818: INFO: stderr: "" May 12 07:42:25.818: INFO: stdout: "update-demo-kitten-5qr5h update-demo-kitten-bn6qd " May 12 07:42:25.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5qr5h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92sh7' May 12 07:42:25.943: INFO: stderr: "" May 12 07:42:25.943: INFO: stdout: "true" May 12 07:42:25.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5qr5h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92sh7' May 12 07:42:26.050: INFO: stderr: "" May 12 07:42:26.050: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 12 07:42:26.050: INFO: validating pod update-demo-kitten-5qr5h May 12 07:42:26.054: INFO: got data: { "image": "kitten.jpg" } May 12 07:42:26.054: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 12 07:42:26.054: INFO: update-demo-kitten-5qr5h is verified up and running May 12 07:42:26.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bn6qd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92sh7' May 12 07:42:26.164: INFO: stderr: "" May 12 07:42:26.164: INFO: stdout: "true" May 12 07:42:26.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bn6qd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-92sh7' May 12 07:42:26.260: INFO: stderr: "" May 12 07:42:26.261: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 12 07:42:26.261: INFO: validating pod update-demo-kitten-bn6qd May 12 07:42:26.265: INFO: got data: { "image": "kitten.jpg" } May 12 07:42:26.265: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 12 07:42:26.265: INFO: update-demo-kitten-bn6qd is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:42:26.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-92sh7" for this suite. 
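Note: the stderr captured above warns that "rolling-update" is deprecated in favour of "rollout". That kubectl subcommand only applies to replication controllers; for a Deployment-managed workload the equivalent rollout would be driven roughly as follows (the deployment name is illustrative, not part of this run):
  kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
  kubectl rollout status deployment/update-demo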
May 12 07:42:50.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:42:50.364: INFO: namespace: e2e-tests-kubectl-92sh7, resource: bindings, ignored listing per whitelist May 12 07:42:50.385: INFO: namespace e2e-tests-kubectl-92sh7 deletion completed in 24.117546282s • [SLOW TEST:56.970 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:42:50.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0512 07:43:21.153847 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 07:43:21.153: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:43:21.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-xnqw2" for this suite. 
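Note: the garbage-collector test above deletes the deployment with deleteOptions.PropagationPolicy=Orphan and then checks that the ReplicaSet survives. A rough CLI sketch of the same orphaning delete (resource name is a placeholder; older kubectl clients such as the one in this run spell the flag --cascade=false, newer ones --cascade=orphan):
  kubectl delete deployment <deployment-name> --cascade=false
  kubectl get rs   # the orphaned ReplicaSet should still be listed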
May 12 07:43:27.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:43:27.173: INFO: namespace: e2e-tests-gc-xnqw2, resource: bindings, ignored listing per whitelist May 12 07:43:27.226: INFO: namespace e2e-tests-gc-xnqw2 deletion completed in 6.07016887s • [SLOW TEST:36.841 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:43:27.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 07:43:27.719: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 20.907956ms) May 12 07:43:27.722: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.783294ms) May 12 07:43:27.724: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.591007ms) May 12 07:43:27.727: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.539481ms) May 12 07:43:27.729: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.327186ms) May 12 07:43:27.732: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.186164ms) May 12 07:43:27.734: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.147447ms) May 12 07:43:27.736: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.335085ms) May 12 07:43:27.739: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.405669ms) May 12 07:43:27.742: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.974884ms) May 12 07:43:27.744: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.155418ms) May 12 07:43:27.747: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.844988ms) May 12 07:43:27.750: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.966068ms) May 12 07:43:27.752: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.479517ms) May 12 07:43:27.755: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.725818ms) May 12 07:43:27.757: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.510867ms) May 12 07:43:27.760: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.428517ms) May 12 07:43:27.763: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.658809ms) May 12 07:43:27.765: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.909533ms) May 12 07:43:27.767: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.431606ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:43:27.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-gq52p" for this suite. May 12 07:43:33.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:43:33.827: INFO: namespace: e2e-tests-proxy-gq52p, resource: bindings, ignored listing per whitelist May 12 07:43:33.856: INFO: namespace e2e-tests-proxy-gq52p deletion completed in 6.086559898s • [SLOW TEST:6.630 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:43:33.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:43:37.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-h55dc" for this suite. 
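Note: the hostAliases test above logs only the framework steps; the pod it schedules injects static /etc/hosts entries via spec.hostAliases. A minimal sketch of a pod of that shape (name, address, and image are illustrative, not taken from this run):
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: hostaliases-demo
  spec:
    restartPolicy: Never
    hostAliases:
    - ip: "127.0.0.1"
      hostnames:
      - "foo.local"
    containers:
    - name: busybox
      image: busybox
      command: ["cat", "/etc/hosts"]
  EOF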
May 12 07:44:18.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:44:18.092: INFO: namespace: e2e-tests-kubelet-test-h55dc, resource: bindings, ignored listing per whitelist May 12 07:44:18.131: INFO: namespace e2e-tests-kubelet-test-h55dc deletion completed in 40.134084704s • [SLOW TEST:44.275 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:44:18.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 07:44:18.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-n84l8' May 12 07:44:18.644: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 07:44:18.644: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 May 12 07:44:18.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-n84l8' May 12 07:44:18.910: INFO: stderr: "" May 12 07:44:18.910: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:44:18.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-n84l8" for this suite. 
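Note: the stderr above flags "kubectl run --generator=job/v1" as deprecated. On newer clients the same one-shot Job would typically be created with "kubectl create job" instead (illustrative, not part of this run):
  kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine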
May 12 07:44:41.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:44:41.024: INFO: namespace: e2e-tests-kubectl-n84l8, resource: bindings, ignored listing per whitelist May 12 07:44:41.071: INFO: namespace e2e-tests-kubectl-n84l8 deletion completed in 22.120661675s • [SLOW TEST:22.939 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:44:41.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-70459d6a-9424-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 07:44:41.168: INFO: Waiting up to 5m0s for pod "pod-configmaps-70464660-9424-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-configmap-zkjns" to be "success or failure" May 12 07:44:41.213: INFO: Pod "pod-configmaps-70464660-9424-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 44.979976ms May 12 07:44:43.217: INFO: Pod "pod-configmaps-70464660-9424-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048542065s May 12 07:44:45.220: INFO: Pod "pod-configmaps-70464660-9424-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051735445s May 12 07:44:47.224: INFO: Pod "pod-configmaps-70464660-9424-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055301937s STEP: Saw pod success May 12 07:44:47.224: INFO: Pod "pod-configmaps-70464660-9424-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:44:47.226: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-70464660-9424-11ea-bb6f-0242ac11001c container configmap-volume-test: STEP: delete the pod May 12 07:44:47.255: INFO: Waiting for pod pod-configmaps-70464660-9424-11ea-bb6f-0242ac11001c to disappear May 12 07:44:47.274: INFO: Pod pod-configmaps-70464660-9424-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:44:47.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-zkjns" for this suite. 
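Note: "consumable from pods in volume with mappings" above means projecting selected ConfigMap keys to chosen file paths via the volume's items list. A minimal sketch of that pattern (all names and the key/path are illustrative):
  kubectl create configmap demo-config --from-literal=data-2=value-2
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-mapping-demo
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["cat", "/etc/config/path/to/data-2"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/config
    volumes:
    - name: cfg
      configMap:
        name: demo-config
        items:
        - key: data-2
          path: path/to/data-2
  EOF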
May 12 07:44:53.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:44:53.377: INFO: namespace: e2e-tests-configmap-zkjns, resource: bindings, ignored listing per whitelist May 12 07:44:53.390: INFO: namespace e2e-tests-configmap-zkjns deletion completed in 6.11241767s • [SLOW TEST:12.319 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:44:53.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-wbhkz STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 07:44:53.545: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 12 07:45:19.755: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.102:8080/dial?request=hostName&protocol=udp&host=10.244.1.101&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-wbhkz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 07:45:19.755: INFO: >>> kubeConfig: /root/.kube/config I0512 07:45:19.788698 6 log.go:172] (0xc0005d9080) (0xc000666780) Create stream I0512 07:45:19.788729 6 log.go:172] (0xc0005d9080) (0xc000666780) Stream added, broadcasting: 1 I0512 07:45:19.791131 6 log.go:172] (0xc0005d9080) Reply frame received for 1 I0512 07:45:19.791176 6 log.go:172] (0xc0005d9080) (0xc000f42000) Create stream I0512 07:45:19.791187 6 log.go:172] (0xc0005d9080) (0xc000f42000) Stream added, broadcasting: 3 I0512 07:45:19.792100 6 log.go:172] (0xc0005d9080) Reply frame received for 3 I0512 07:45:19.792132 6 log.go:172] (0xc0005d9080) (0xc000666b40) Create stream I0512 07:45:19.792144 6 log.go:172] (0xc0005d9080) (0xc000666b40) Stream added, broadcasting: 5 I0512 07:45:19.793032 6 log.go:172] (0xc0005d9080) Reply frame received for 5 I0512 07:45:19.864004 6 log.go:172] (0xc0005d9080) Data frame received for 3 I0512 07:45:19.864036 6 log.go:172] (0xc000f42000) (3) Data frame handling I0512 07:45:19.864052 6 log.go:172] (0xc000f42000) (3) Data frame sent I0512 07:45:19.864695 6 log.go:172] (0xc0005d9080) Data frame received for 3 I0512 07:45:19.864726 6 log.go:172] (0xc000f42000) (3) Data frame handling I0512 07:45:19.864747 6 log.go:172] (0xc0005d9080) Data frame received for 5 I0512 07:45:19.864754 6 log.go:172] (0xc000666b40) (5) Data frame handling I0512 07:45:19.865848 6 log.go:172] (0xc0005d9080) Data frame 
received for 1 I0512 07:45:19.865864 6 log.go:172] (0xc000666780) (1) Data frame handling I0512 07:45:19.865881 6 log.go:172] (0xc000666780) (1) Data frame sent I0512 07:45:19.865896 6 log.go:172] (0xc0005d9080) (0xc000666780) Stream removed, broadcasting: 1 I0512 07:45:19.865917 6 log.go:172] (0xc0005d9080) Go away received I0512 07:45:19.865992 6 log.go:172] (0xc0005d9080) (0xc000666780) Stream removed, broadcasting: 1 I0512 07:45:19.866007 6 log.go:172] (0xc0005d9080) (0xc000f42000) Stream removed, broadcasting: 3 I0512 07:45:19.866018 6 log.go:172] (0xc0005d9080) (0xc000666b40) Stream removed, broadcasting: 5 May 12 07:45:19.866: INFO: Waiting for endpoints: map[] May 12 07:45:19.868: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.102:8080/dial?request=hostName&protocol=udp&host=10.244.2.27&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-wbhkz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 07:45:19.868: INFO: >>> kubeConfig: /root/.kube/config I0512 07:45:19.896202 6 log.go:172] (0xc0011744d0) (0xc001e350e0) Create stream I0512 07:45:19.896245 6 log.go:172] (0xc0011744d0) (0xc001e350e0) Stream added, broadcasting: 1 I0512 07:45:19.899271 6 log.go:172] (0xc0011744d0) Reply frame received for 1 I0512 07:45:19.899324 6 log.go:172] (0xc0011744d0) (0xc0001126e0) Create stream I0512 07:45:19.899346 6 log.go:172] (0xc0011744d0) (0xc0001126e0) Stream added, broadcasting: 3 I0512 07:45:19.900270 6 log.go:172] (0xc0011744d0) Reply frame received for 3 I0512 07:45:19.900330 6 log.go:172] (0xc0011744d0) (0xc000a72fa0) Create stream I0512 07:45:19.900386 6 log.go:172] (0xc0011744d0) (0xc000a72fa0) Stream added, broadcasting: 5 I0512 07:45:19.901968 6 log.go:172] (0xc0011744d0) Reply frame received for 5 I0512 07:45:19.972395 6 log.go:172] (0xc0011744d0) Data frame received for 3 I0512 07:45:19.972430 6 log.go:172] (0xc0001126e0) (3) Data frame handling I0512 07:45:19.972448 6 log.go:172] (0xc0001126e0) (3) Data frame sent I0512 07:45:19.973805 6 log.go:172] (0xc0011744d0) Data frame received for 5 I0512 07:45:19.973833 6 log.go:172] (0xc000a72fa0) (5) Data frame handling I0512 07:45:19.974137 6 log.go:172] (0xc0011744d0) Data frame received for 3 I0512 07:45:19.974158 6 log.go:172] (0xc0001126e0) (3) Data frame handling I0512 07:45:19.975666 6 log.go:172] (0xc0011744d0) Data frame received for 1 I0512 07:45:19.975714 6 log.go:172] (0xc001e350e0) (1) Data frame handling I0512 07:45:19.975739 6 log.go:172] (0xc001e350e0) (1) Data frame sent I0512 07:45:19.975760 6 log.go:172] (0xc0011744d0) (0xc001e350e0) Stream removed, broadcasting: 1 I0512 07:45:19.975868 6 log.go:172] (0xc0011744d0) (0xc001e350e0) Stream removed, broadcasting: 1 I0512 07:45:19.975885 6 log.go:172] (0xc0011744d0) (0xc0001126e0) Stream removed, broadcasting: 3 I0512 07:45:19.975909 6 log.go:172] (0xc0011744d0) (0xc000a72fa0) Stream removed, broadcasting: 5 May 12 07:45:19.975: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:45:19.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0512 07:45:19.976285 6 log.go:172] (0xc0011744d0) Go away received STEP: Destroying namespace "e2e-tests-pod-network-test-wbhkz" for this suite. 
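Note: the ExecWithOptions entries above implement the suite's "dial" pattern: a helper container curls the webserver test pod's /dial endpoint, which sends a UDP probe to the target pod and reports which hostnames answered. The exact request used in this run (taken verbatim from the log) was:
  curl -g -q -s 'http://10.244.1.102:8080/dial?request=hostName&protocol=udp&host=10.244.1.101&port=8081&tries=1'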
May 12 07:45:43.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:45:44.055: INFO: namespace: e2e-tests-pod-network-test-wbhkz, resource: bindings, ignored listing per whitelist May 12 07:45:44.063: INFO: namespace e2e-tests-pod-network-test-wbhkz deletion completed in 24.082770778s • [SLOW TEST:50.673 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:45:44.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 12 07:45:44.151: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:45:53.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-69smf" for this suite. 
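Note: the init-container test above only records "PodSpec: initContainers in spec.initContainers"; the point is that a RestartAlways pod's init containers must all run to completion, in order, before the regular containers start. A minimal sketch of that pod shape (not the exact spec used by the suite):
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo
  spec:
    restartPolicy: Always
    initContainers:
    - name: init-1
      image: busybox
      command: ["/bin/true"]
    - name: init-2
      image: busybox
      command: ["/bin/true"]
    containers:
    - name: main
      image: busybox
      command: ["/bin/sh", "-c", "sleep 3600"]
  EOF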
May 12 07:46:15.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:46:15.583: INFO: namespace: e2e-tests-init-container-69smf, resource: bindings, ignored listing per whitelist May 12 07:46:15.638: INFO: namespace e2e-tests-init-container-69smf deletion completed in 22.139253385s • [SLOW TEST:31.575 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:46:15.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 07:46:15.739: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8a337fb-9424-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-downward-api-7d6w2" to be "success or failure" May 12 07:46:15.750: INFO: Pod "downwardapi-volume-a8a337fb-9424-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.423094ms May 12 07:46:17.754: INFO: Pod "downwardapi-volume-a8a337fb-9424-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014178488s May 12 07:46:19.758: INFO: Pod "downwardapi-volume-a8a337fb-9424-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018470992s STEP: Saw pod success May 12 07:46:19.758: INFO: Pod "downwardapi-volume-a8a337fb-9424-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:46:19.761: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-a8a337fb-9424-11ea-bb6f-0242ac11001c container client-container: STEP: delete the pod May 12 07:46:19.948: INFO: Waiting for pod downwardapi-volume-a8a337fb-9424-11ea-bb6f-0242ac11001c to disappear May 12 07:46:19.990: INFO: Pod downwardapi-volume-a8a337fb-9424-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:46:19.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7d6w2" for this suite. 
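Note: the Downward API volume test above surfaces the container's memory limit as a file via a resourceFieldRef. A minimal sketch of the volume wiring it exercises (names, image, and limit value are illustrative):
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-memlimit-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["cat", "/etc/podinfo/memory_limit"]
      resources:
        limits:
          memory: "64Mi"
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: memory_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory
  EOF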
May 12 07:46:26.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:46:26.056: INFO: namespace: e2e-tests-downward-api-7d6w2, resource: bindings, ignored listing per whitelist May 12 07:46:26.112: INFO: namespace e2e-tests-downward-api-7d6w2 deletion completed in 6.118299759s • [SLOW TEST:10.474 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:46:26.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 07:46:26.196: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:46:27.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-qwx8h" for this suite. 
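Note: the CustomResourceDefinition test above only exercises the create/delete round-trip. Against a v1.13 API server such as this one, a minimal definition would use apiextensions.k8s.io/v1beta1 (the group and kind below are illustrative, not from this run):
  kubectl create -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1beta1
  kind: CustomResourceDefinition
  metadata:
    name: foos.example.com
  spec:
    group: example.com
    version: v1
    scope: Namespaced
    names:
      plural: foos
      singular: foo
      kind: Foo
  EOF
  kubectl delete crd foos.example.com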
May 12 07:46:33.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:46:33.326: INFO: namespace: e2e-tests-custom-resource-definition-qwx8h, resource: bindings, ignored listing per whitelist May 12 07:46:33.383: INFO: namespace e2e-tests-custom-resource-definition-qwx8h deletion completed in 6.086601733s • [SLOW TEST:7.271 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:46:33.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 07:46:34.625: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 12 07:46:39.839: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 12 07:46:42.241: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 12 07:46:44.245: INFO: Creating deployment "test-rollover-deployment" May 12 07:46:44.311: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 12 07:46:46.323: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 12 07:46:46.386: INFO: Ensure that both replica sets have 1 created replica May 12 07:46:46.391: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 12 07:46:46.397: INFO: Updating deployment test-rollover-deployment May 12 07:46:46.398: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 12 07:46:48.824: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 12 07:46:49.686: INFO: Make sure deployment "test-rollover-deployment" is complete May 12 07:46:49.691: INFO: all replica sets need to contain the pod-template-hash label May 12 07:46:49.692: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866409, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 07:46:51.699: INFO: all replica sets need to contain the pod-template-hash label May 12 07:46:51.699: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866409, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 07:46:53.700: INFO: all replica sets need to contain the pod-template-hash label May 12 07:46:53.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866409, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 07:46:55.877: INFO: all replica sets need to contain the pod-template-hash label May 12 07:46:55.877: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866409, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 
07:46:57.705: INFO: all replica sets need to contain the pod-template-hash label May 12 07:46:57.705: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866415, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 07:46:59.701: INFO: all replica sets need to contain the pod-template-hash label May 12 07:46:59.701: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866415, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 07:47:01.912: INFO: all replica sets need to contain the pod-template-hash label May 12 07:47:01.912: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866415, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 07:47:03.962: INFO: all replica sets need to contain the pod-template-hash label May 12 07:47:03.963: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866415, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 07:47:05.701: INFO: all replica sets need to contain the pod-template-hash label May 12 07:47:05.701: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866415, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724866404, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 07:47:07.700: INFO: May 12 07:47:07.700: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 12 07:47:07.708: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-gl6lx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gl6lx/deployments/test-rollover-deployment,UID:b9a39f1b-9424-11ea-99e8-0242ac110002,ResourceVersion:10116460,Generation:2,CreationTimestamp:2020-05-12 07:46:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-12 07:46:44 +0000 UTC 2020-05-12 07:46:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-12 07:47:07 +0000 UTC 2020-05-12 07:46:44 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 12 07:47:07.711: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-gl6lx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gl6lx/replicasets/test-rollover-deployment-5b8479fdb6,UID:baec36ef-9424-11ea-99e8-0242ac110002,ResourceVersion:10116450,Generation:2,CreationTimestamp:2020-05-12 07:46:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b9a39f1b-9424-11ea-99e8-0242ac110002 0xc001204177 0xc001204178}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 12 07:47:07.711: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 12 07:47:07.711: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-gl6lx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gl6lx/replicasets/test-rollover-controller,UID:b3a799c9-9424-11ea-99e8-0242ac110002,ResourceVersion:10116459,Generation:2,CreationTimestamp:2020-05-12 07:46:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b9a39f1b-9424-11ea-99e8-0242ac110002 0xc0018affe7 0xc0018affe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 07:47:07.711: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-gl6lx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gl6lx/replicasets/test-rollover-deployment-58494b7559,UID:b9aea536-9424-11ea-99e8-0242ac110002,ResourceVersion:10116407,Generation:2,CreationTimestamp:2020-05-12 07:46:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b9a39f1b-9424-11ea-99e8-0242ac110002 0xc0012040a7 0xc0012040a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 07:47:07.714: INFO: Pod "test-rollover-deployment-5b8479fdb6-nmcbc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-nmcbc,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-gl6lx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gl6lx/pods/test-rollover-deployment-5b8479fdb6-nmcbc,UID:bbcc5cfe-9424-11ea-99e8-0242ac110002,ResourceVersion:10116428,Generation:0,CreationTimestamp:2020-05-12 07:46:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 baec36ef-9424-11ea-99e8-0242ac110002 0xc0019ed3e7 
0xc0019ed3e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-c6kkt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c6kkt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-c6kkt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019ed4e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019ed500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 07:46:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 07:46:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 07:46:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 07:46:47 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.105,StartTime:2020-05-12 07:46:48 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-12 07:46:54 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://6fb4302d3beaa083e16dac260196b84462170fe0dfcf8575c25e6e502b96e725}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:47:07.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-gl6lx" for this suite. 
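For reference, the rollover test logged above drives a Deployment whose pod template is replaced mid-rollout, so a new ReplicaSet (test-rollover-deployment-5b8479fdb6) becomes available while the older ones (test-rollover-controller, test-rollover-deployment-58494b7559) scale to zero. A minimal sketch of such a Deployment, built against the v1.13-era k8s.io/api packages this suite uses; names, image, and replica counts here are illustrative, not taken from the log:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "rollover-pod"}
	// One replica; MinReadySeconds > 0 makes the controller wait before it
	// counts the new ReplicaSet's pod as available during the rollover.
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:        int32Ptr(1),
			MinReadySeconds: 10,
			Selector:        &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	// Updating Spec.Template (for example swapping the image) is what
	// triggers the rollover: a new ReplicaSet is created and the old one
	// is scaled down to zero, which is exactly what the dumps above show.
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}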
May 12 07:47:17.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:47:17.836: INFO: namespace: e2e-tests-deployment-gl6lx, resource: bindings, ignored listing per whitelist May 12 07:47:17.903: INFO: namespace e2e-tests-deployment-gl6lx deletion completed in 10.142261758s • [SLOW TEST:44.520 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:47:17.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-cdc65be3-9424-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 07:47:18.047: INFO: Waiting up to 5m0s for pod "pod-configmaps-cdc80472-9424-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-configmap-65lqs" to be "success or failure" May 12 07:47:18.051: INFO: Pod "pod-configmaps-cdc80472-9424-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139762ms May 12 07:47:20.055: INFO: Pod "pod-configmaps-cdc80472-9424-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00811599s May 12 07:47:22.059: INFO: Pod "pod-configmaps-cdc80472-9424-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01210802s May 12 07:47:24.063: INFO: Pod "pod-configmaps-cdc80472-9424-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016281319s STEP: Saw pod success May 12 07:47:24.063: INFO: Pod "pod-configmaps-cdc80472-9424-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:47:24.066: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-cdc80472-9424-11ea-bb6f-0242ac11001c container configmap-volume-test: STEP: delete the pod May 12 07:47:24.083: INFO: Waiting for pod pod-configmaps-cdc80472-9424-11ea-bb6f-0242ac11001c to disappear May 12 07:47:24.113: INFO: Pod pod-configmaps-cdc80472-9424-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:47:24.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-65lqs" for this suite. 
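The ConfigMap test above mounts a ConfigMap as a volume with an item mapping (a key renamed to a target path inside the volume) while the pod runs as a non-root user. A minimal sketch of that shape, using the same v1.13-era k8s.io/api types; the names, key, UID, and image below are illustrative assumptions, not values from the log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	// Mount ConfigMap key "data-1" at "path/to/data-2" inside the volume,
	// with the whole pod running as UID 1000 (non-root).
	p := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000),
			},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "configmap-test-volume-map",
						},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "path/to/data-2",
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(p, "", "  ")
	fmt.Println(string(out))
}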
May 12 07:47:30.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:47:30.183: INFO: namespace: e2e-tests-configmap-65lqs, resource: bindings, ignored listing per whitelist May 12 07:47:30.222: INFO: namespace e2e-tests-configmap-65lqs deletion completed in 6.105609359s • [SLOW TEST:12.319 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:47:30.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode May 12 07:47:30.376: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-k592v" to be "success or failure" May 12 07:47:30.380: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.61556ms May 12 07:47:32.384: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007596405s May 12 07:47:34.387: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011009597s May 12 07:47:36.391: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014660126s May 12 07:47:38.394: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018031443s STEP: Saw pod success May 12 07:47:38.394: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 12 07:47:38.397: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 12 07:47:38.424: INFO: Waiting for pod pod-host-path-test to disappear May 12 07:47:38.521: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:47:38.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-k592v" for this suite. 
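The HostPath test above (pod-host-path-test) checks the file mode a hostPath volume receives inside the container. A minimal sketch of a pod with a hostPath volume; the path, image, and command are illustrative assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Mount a directory from the node at /test-volume and print its mode
	// bits from inside the container; a test like the one above asserts
	// on that output.
	p := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/host-path-test"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-1",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(p, "", "  ")
	fmt.Println(string(out))
}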
May 12 07:47:44.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:47:44.623: INFO: namespace: e2e-tests-hostpath-k592v, resource: bindings, ignored listing per whitelist May 12 07:47:44.654: INFO: namespace e2e-tests-hostpath-k592v deletion completed in 6.129333719s • [SLOW TEST:14.432 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:47:44.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-ddc1940c-9424-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume secrets May 12 07:47:44.855: INFO: Waiting up to 5m0s for pod "pod-secrets-ddc2c3c1-9424-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-secrets-s7cm9" to be "success or failure" May 12 07:47:44.860: INFO: Pod "pod-secrets-ddc2c3c1-9424-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.521777ms May 12 07:47:46.863: INFO: Pod "pod-secrets-ddc2c3c1-9424-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007909614s May 12 07:47:48.867: INFO: Pod "pod-secrets-ddc2c3c1-9424-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01231198s May 12 07:47:50.871: INFO: Pod "pod-secrets-ddc2c3c1-9424-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016151173s STEP: Saw pod success May 12 07:47:50.871: INFO: Pod "pod-secrets-ddc2c3c1-9424-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:47:50.875: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-ddc2c3c1-9424-11ea-bb6f-0242ac11001c container secret-volume-test: STEP: delete the pod May 12 07:47:50.911: INFO: Waiting for pod pod-secrets-ddc2c3c1-9424-11ea-bb6f-0242ac11001c to disappear May 12 07:47:51.240: INFO: Pod pod-secrets-ddc2c3c1-9424-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:47:51.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-s7cm9" for this suite. 
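The Secrets test above is the secret analogue of the ConfigMap mapping test: a Secret key is remapped to a different path inside the mounted volume. A minimal sketch with illustrative names, key, and image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Mount Secret key "data-1" at "new-path-data-1" inside the volume.
	p := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map",
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(p, "", "  ")
	fmt.Println(string(out))
}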
May 12 07:48:05.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:48:06.410: INFO: namespace: e2e-tests-secrets-s7cm9, resource: bindings, ignored listing per whitelist May 12 07:48:06.428: INFO: namespace e2e-tests-secrets-s7cm9 deletion completed in 15.1840601s • [SLOW TEST:21.774 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:48:06.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-nvfrn STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-nvfrn STEP: Deleting pre-stop pod May 12 07:48:24.826: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:48:24.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-nvfrn" for this suite. 
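The PreStop test above starts a server pod, then a tester pod whose preStop hook calls back to the server when the tester is deleted; the JSON blob in the log ("prestop": 1) is the server confirming it received that callback. A minimal sketch of a container with a preStop hook; the handler type is corev1.Handler in the v1.13-era API this suite builds against (later renamed LifecycleHandler), and the callback URL and commands here are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// When this pod is deleted, the kubelet runs the preStop command
	// (here an illustrative HTTP callback) before sending SIGTERM to
	// the container process.
	p := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox",
				Command: []string{"sleep", "600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"wget", "-qO-", "http://server.example.svc:8080/prestop"},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(p, "", "  ")
	fmt.Println(string(out))
}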
May 12 07:49:09.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:49:09.623: INFO: namespace: e2e-tests-prestop-nvfrn, resource: bindings, ignored listing per whitelist May 12 07:49:09.673: INFO: namespace e2e-tests-prestop-nvfrn deletion completed in 44.509010562s • [SLOW TEST:63.245 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:49:09.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 07:49:11.687: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1121a5de-9425-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-downward-api-xnk6w" to be "success or failure" May 12 07:49:12.026: INFO: Pod "downwardapi-volume-1121a5de-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 338.857717ms May 12 07:49:14.030: INFO: Pod "downwardapi-volume-1121a5de-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.343346899s May 12 07:49:16.034: INFO: Pod "downwardapi-volume-1121a5de-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.347240934s May 12 07:49:18.187: INFO: Pod "downwardapi-volume-1121a5de-9425-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.500405243s May 12 07:49:20.190: INFO: Pod "downwardapi-volume-1121a5de-9425-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.503150246s STEP: Saw pod success May 12 07:49:20.190: INFO: Pod "downwardapi-volume-1121a5de-9425-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:49:20.193: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-1121a5de-9425-11ea-bb6f-0242ac11001c container client-container: STEP: delete the pod May 12 07:49:20.237: INFO: Waiting for pod downwardapi-volume-1121a5de-9425-11ea-bb6f-0242ac11001c to disappear May 12 07:49:20.246: INFO: Pod downwardapi-volume-1121a5de-9425-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:49:20.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xnk6w" for this suite. 
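The Downward API test above exposes the container's own memory request through a downwardAPI volume file and then checks the file's contents from the container logs. A minimal sketch using resourceFieldRef with the "requests.memory" selector; pod name, image, request size, and file path are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The file /etc/podinfo/memory_request will contain the container's
	// memory request; with the default divisor, memory is written in bytes.
	p := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(p, "", "  ")
	fmt.Println(string(out))
}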
May 12 07:49:28.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:49:28.329: INFO: namespace: e2e-tests-downward-api-xnk6w, resource: bindings, ignored listing per whitelist May 12 07:49:28.350: INFO: namespace e2e-tests-downward-api-xnk6w deletion completed in 8.101436292s • [SLOW TEST:18.677 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:49:28.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-nx6f STEP: Creating a pod to test atomic-volume-subpath May 12 07:49:28.492: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nx6f" in namespace "e2e-tests-subpath-c2hh4" to be "success or failure" May 12 07:49:28.522: INFO: Pod "pod-subpath-test-configmap-nx6f": Phase="Pending", Reason="", readiness=false. Elapsed: 29.33934ms May 12 07:49:30.525: INFO: Pod "pod-subpath-test-configmap-nx6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032349417s May 12 07:49:35.662: INFO: Pod "pod-subpath-test-configmap-nx6f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.169383357s May 12 07:49:37.665: INFO: Pod "pod-subpath-test-configmap-nx6f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.172746679s May 12 07:49:39.743: INFO: Pod "pod-subpath-test-configmap-nx6f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.250860891s May 12 07:49:41.746: INFO: Pod "pod-subpath-test-configmap-nx6f": Phase="Running", Reason="", readiness=false. Elapsed: 13.254027687s May 12 07:49:43.750: INFO: Pod "pod-subpath-test-configmap-nx6f": Phase="Running", Reason="", readiness=false. Elapsed: 15.257606283s May 12 07:49:45.754: INFO: Pod "pod-subpath-test-configmap-nx6f": Phase="Running", Reason="", readiness=false. Elapsed: 17.26207169s May 12 07:49:47.759: INFO: Pod "pod-subpath-test-configmap-nx6f": Phase="Running", Reason="", readiness=false. Elapsed: 19.266216792s May 12 07:49:50.697: INFO: Pod "pod-subpath-test-configmap-nx6f": Phase="Running", Reason="", readiness=false. Elapsed: 22.204524955s May 12 07:49:52.701: INFO: Pod "pod-subpath-test-configmap-nx6f": Phase="Running", Reason="", readiness=false. Elapsed: 24.209050955s May 12 07:49:54.706: INFO: Pod "pod-subpath-test-configmap-nx6f": Phase="Running", Reason="", readiness=false. 
Elapsed: 26.21367118s May 12 07:49:56.710: INFO: Pod "pod-subpath-test-configmap-nx6f": Phase="Running", Reason="", readiness=false. Elapsed: 28.217699708s May 12 07:49:58.714: INFO: Pod "pod-subpath-test-configmap-nx6f": Phase="Running", Reason="", readiness=false. Elapsed: 30.221552685s May 12 07:50:00.718: INFO: Pod "pod-subpath-test-configmap-nx6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.225981851s STEP: Saw pod success May 12 07:50:00.718: INFO: Pod "pod-subpath-test-configmap-nx6f" satisfied condition "success or failure" May 12 07:50:00.721: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-nx6f container test-container-subpath-configmap-nx6f: STEP: delete the pod May 12 07:50:00.943: INFO: Waiting for pod pod-subpath-test-configmap-nx6f to disappear May 12 07:50:01.068: INFO: Pod pod-subpath-test-configmap-nx6f no longer exists STEP: Deleting pod pod-subpath-test-configmap-nx6f May 12 07:50:01.068: INFO: Deleting pod "pod-subpath-test-configmap-nx6f" in namespace "e2e-tests-subpath-c2hh4" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:50:01.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-c2hh4" for this suite. May 12 07:50:09.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:50:09.369: INFO: namespace: e2e-tests-subpath-c2hh4, resource: bindings, ignored listing per whitelist May 12 07:50:09.407: INFO: namespace e2e-tests-subpath-c2hh4 deletion completed in 8.332990875s • [SLOW TEST:41.056 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:50:09.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-340c7726-9425-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 07:50:09.636: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-340cf3ae-9425-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-wgdrm" to be "success or failure" May 12 07:50:09.835: INFO: Pod "pod-projected-configmaps-340cf3ae-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 198.822583ms May 12 07:50:11.839: INFO: Pod "pod-projected-configmaps-340cf3ae-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202473242s May 12 07:50:13.843: INFO: Pod "pod-projected-configmaps-340cf3ae-9425-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.206765649s May 12 07:50:15.847: INFO: Pod "pod-projected-configmaps-340cf3ae-9425-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.210475284s STEP: Saw pod success May 12 07:50:15.847: INFO: Pod "pod-projected-configmaps-340cf3ae-9425-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:50:15.849: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-340cf3ae-9425-11ea-bb6f-0242ac11001c container projected-configmap-volume-test: STEP: delete the pod May 12 07:50:15.876: INFO: Waiting for pod pod-projected-configmaps-340cf3ae-9425-11ea-bb6f-0242ac11001c to disappear May 12 07:50:16.081: INFO: Pod pod-projected-configmaps-340cf3ae-9425-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:50:16.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wgdrm" for this suite. May 12 07:50:24.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:50:24.208: INFO: namespace: e2e-tests-projected-wgdrm, resource: bindings, ignored listing per whitelist May 12 07:50:24.213: INFO: namespace e2e-tests-projected-wgdrm deletion completed in 8.127798675s • [SLOW TEST:14.806 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:50:24.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-3cd0589e-9425-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume secrets May 12 07:50:24.361: INFO: Waiting up to 5m0s for pod "pod-secrets-3cd5b4c5-9425-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-secrets-qqrj5" to be "success or failure" May 12 07:50:24.405: INFO: Pod "pod-secrets-3cd5b4c5-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 43.900814ms May 12 07:50:26.408: INFO: Pod "pod-secrets-3cd5b4c5-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04647756s May 12 07:50:28.412: INFO: Pod "pod-secrets-3cd5b4c5-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050299998s May 12 07:50:30.416: INFO: Pod "pod-secrets-3cd5b4c5-9425-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054561075s STEP: Saw pod success May 12 07:50:30.416: INFO: Pod "pod-secrets-3cd5b4c5-9425-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:50:30.419: INFO: Trying to get logs from node hunter-worker pod pod-secrets-3cd5b4c5-9425-11ea-bb6f-0242ac11001c container secret-volume-test: STEP: delete the pod May 12 07:50:30.549: INFO: Waiting for pod pod-secrets-3cd5b4c5-9425-11ea-bb6f-0242ac11001c to disappear May 12 07:50:30.595: INFO: Pod pod-secrets-3cd5b4c5-9425-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:50:30.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-qqrj5" for this suite. May 12 07:50:37.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:50:37.777: INFO: namespace: e2e-tests-secrets-qqrj5, resource: bindings, ignored listing per whitelist May 12 07:50:37.788: INFO: namespace e2e-tests-secrets-qqrj5 deletion completed in 7.190800918s • [SLOW TEST:13.575 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:50:37.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 12 07:50:38.577: INFO: Waiting up to 5m0s for pod "pod-454d5159-9425-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-emptydir-w9c25" to be "success or failure" May 12 07:50:38.600: INFO: Pod "pod-454d5159-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.030292ms May 12 07:50:40.603: INFO: Pod "pod-454d5159-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02523429s May 12 07:50:42.607: INFO: Pod "pod-454d5159-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029211527s May 12 07:50:44.895: INFO: Pod "pod-454d5159-9425-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.317788384s STEP: Saw pod success May 12 07:50:44.895: INFO: Pod "pod-454d5159-9425-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:50:44.898: INFO: Trying to get logs from node hunter-worker2 pod pod-454d5159-9425-11ea-bb6f-0242ac11001c container test-container: STEP: delete the pod May 12 07:50:44.924: INFO: Waiting for pod pod-454d5159-9425-11ea-bb6f-0242ac11001c to disappear May 12 07:50:44.992: INFO: Pod pod-454d5159-9425-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:50:44.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-w9c25" for this suite. May 12 07:50:53.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:50:53.106: INFO: namespace: e2e-tests-emptydir-w9c25, resource: bindings, ignored listing per whitelist May 12 07:50:53.113: INFO: namespace e2e-tests-emptydir-w9c25 deletion completed in 8.085980989s • [SLOW TEST:15.325 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:50:53.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-4e0b81cb-9425-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 07:50:53.263: INFO: Waiting up to 5m0s for pod "pod-configmaps-4e0c4def-9425-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-configmap-2vdlx" to be "success or failure" May 12 07:50:53.293: INFO: Pod "pod-configmaps-4e0c4def-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.15268ms May 12 07:50:55.297: INFO: Pod "pod-configmaps-4e0c4def-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034641165s May 12 07:50:57.306: INFO: Pod "pod-configmaps-4e0c4def-9425-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.043555445s STEP: Saw pod success May 12 07:50:57.306: INFO: Pod "pod-configmaps-4e0c4def-9425-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:50:57.309: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-4e0c4def-9425-11ea-bb6f-0242ac11001c container configmap-volume-test: STEP: delete the pod May 12 07:50:57.838: INFO: Waiting for pod pod-configmaps-4e0c4def-9425-11ea-bb6f-0242ac11001c to disappear May 12 07:50:57.871: INFO: Pod pod-configmaps-4e0c4def-9425-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:50:57.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-2vdlx" for this suite. May 12 07:51:04.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:51:04.099: INFO: namespace: e2e-tests-configmap-2vdlx, resource: bindings, ignored listing per whitelist May 12 07:51:04.106: INFO: namespace e2e-tests-configmap-2vdlx deletion completed in 6.196794508s • [SLOW TEST:10.993 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:51:04.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-cx2fk I0512 07:51:04.225926 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-cx2fk, replica count: 1 I0512 07:51:05.276318 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 07:51:06.276507 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 07:51:07.276673 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 07:51:08.276869 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 07:51:09.277079 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 07:51:10.277507 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady May 12 07:51:10.703: INFO: Created: latency-svc-r9gv7 May 12 07:51:11.042: INFO: Got endpoints: latency-svc-r9gv7 [664.79187ms] May 12 07:51:11.122: INFO: Created: latency-svc-fp58n May 12 07:51:11.218: INFO: Got endpoints: latency-svc-fp58n [176.219053ms] May 12 07:51:11.303: INFO: Created: latency-svc-pghqx May 12 07:51:11.375: INFO: Got endpoints: latency-svc-pghqx [332.896839ms] May 12 07:51:11.429: INFO: Created: latency-svc-n5kl2 May 12 07:51:11.596: INFO: Got endpoints: latency-svc-n5kl2 [553.389835ms] May 12 07:51:11.650: INFO: Created: latency-svc-9jnpm May 12 07:51:11.771: INFO: Got endpoints: latency-svc-9jnpm [728.236287ms] May 12 07:51:11.815: INFO: Created: latency-svc-2bw9w May 12 07:51:11.826: INFO: Got endpoints: latency-svc-2bw9w [783.836693ms] May 12 07:51:11.848: INFO: Created: latency-svc-mbslw May 12 07:51:11.863: INFO: Got endpoints: latency-svc-mbslw [820.314536ms] May 12 07:51:11.919: INFO: Created: latency-svc-5h7kj May 12 07:51:11.935: INFO: Got endpoints: latency-svc-5h7kj [892.668006ms] May 12 07:51:11.974: INFO: Created: latency-svc-m5djr May 12 07:51:12.019: INFO: Got endpoints: latency-svc-m5djr [976.233211ms] May 12 07:51:12.106: INFO: Created: latency-svc-qtj6f May 12 07:51:12.121: INFO: Got endpoints: latency-svc-qtj6f [1.078222026s] May 12 07:51:12.155: INFO: Created: latency-svc-qlb9s May 12 07:51:12.182: INFO: Got endpoints: latency-svc-qlb9s [1.139403089s] May 12 07:51:12.303: INFO: Created: latency-svc-fmz4g May 12 07:51:12.307: INFO: Got endpoints: latency-svc-fmz4g [1.264453288s] May 12 07:51:12.353: INFO: Created: latency-svc-6q875 May 12 07:51:12.367: INFO: Got endpoints: latency-svc-6q875 [1.324527361s] May 12 07:51:12.459: INFO: Created: latency-svc-p4sx2 May 12 07:51:12.479: INFO: Got endpoints: latency-svc-p4sx2 [1.436035742s] May 12 07:51:12.504: INFO: Created: latency-svc-5858r May 12 07:51:12.522: INFO: Got endpoints: latency-svc-5858r [1.479936873s] May 12 07:51:12.547: INFO: Created: latency-svc-849tx May 12 07:51:12.622: INFO: Got endpoints: latency-svc-849tx [1.579415925s] May 12 07:51:12.666: INFO: Created: latency-svc-6nh9z May 12 07:51:12.775: INFO: Got endpoints: latency-svc-6nh9z [1.556158338s] May 12 07:51:12.847: INFO: Created: latency-svc-vxktj May 12 07:51:12.935: INFO: Got endpoints: latency-svc-vxktj [1.560346524s] May 12 07:51:12.978: INFO: Created: latency-svc-hlpms May 12 07:51:13.069: INFO: Got endpoints: latency-svc-hlpms [1.473104757s] May 12 07:51:13.074: INFO: Created: latency-svc-dhggs May 12 07:51:13.092: INFO: Got endpoints: latency-svc-dhggs [1.321712047s] May 12 07:51:13.135: INFO: Created: latency-svc-7275d May 12 07:51:13.159: INFO: Got endpoints: latency-svc-7275d [1.333175107s] May 12 07:51:13.214: INFO: Created: latency-svc-426vf May 12 07:51:13.226: INFO: Got endpoints: latency-svc-426vf [1.362883672s] May 12 07:51:13.242: INFO: Created: latency-svc-sx5tz May 12 07:51:13.256: INFO: Got endpoints: latency-svc-sx5tz [1.320471758s] May 12 07:51:13.297: INFO: Created: latency-svc-7nflp May 12 07:51:13.310: INFO: Got endpoints: latency-svc-7nflp [1.291182029s] May 12 07:51:13.357: INFO: Created: latency-svc-clx6q May 12 07:51:13.360: INFO: Got endpoints: latency-svc-clx6q [1.238657674s] May 12 07:51:13.386: INFO: Created: latency-svc-xw7ns May 12 07:51:13.400: INFO: Got endpoints: latency-svc-xw7ns [1.218268082s] May 12 07:51:13.422: INFO: Created: latency-svc-vm9j6 May 12 07:51:13.437: INFO: Got endpoints: latency-svc-vm9j6 [1.129892086s] May 12 07:51:13.585: INFO: 
Created: latency-svc-5qxnz May 12 07:51:13.587: INFO: Got endpoints: latency-svc-5qxnz [1.219997892s] May 12 07:51:13.656: INFO: Created: latency-svc-s79qx May 12 07:51:13.683: INFO: Got endpoints: latency-svc-s79qx [1.204581389s] May 12 07:51:13.758: INFO: Created: latency-svc-tng89 May 12 07:51:13.786: INFO: Got endpoints: latency-svc-tng89 [1.26345538s] May 12 07:51:14.027: INFO: Created: latency-svc-w4lbc May 12 07:51:14.069: INFO: Got endpoints: latency-svc-w4lbc [1.446839627s] May 12 07:51:14.113: INFO: Created: latency-svc-79h6w May 12 07:51:14.195: INFO: Got endpoints: latency-svc-79h6w [1.420061186s] May 12 07:51:14.198: INFO: Created: latency-svc-lwdn4 May 12 07:51:14.242: INFO: Got endpoints: latency-svc-lwdn4 [1.306370282s] May 12 07:51:14.293: INFO: Created: latency-svc-q6q5j May 12 07:51:14.387: INFO: Got endpoints: latency-svc-q6q5j [1.317614993s] May 12 07:51:14.419: INFO: Created: latency-svc-nps74 May 12 07:51:14.458: INFO: Got endpoints: latency-svc-nps74 [1.365983448s] May 12 07:51:14.609: INFO: Created: latency-svc-4lj8k May 12 07:51:14.632: INFO: Got endpoints: latency-svc-4lj8k [1.472583057s] May 12 07:51:14.690: INFO: Created: latency-svc-kdc5f May 12 07:51:14.818: INFO: Got endpoints: latency-svc-kdc5f [1.592058421s] May 12 07:51:14.892: INFO: Created: latency-svc-2hprh May 12 07:51:15.021: INFO: Got endpoints: latency-svc-2hprh [1.765512404s] May 12 07:51:15.095: INFO: Created: latency-svc-snwl4 May 12 07:51:15.207: INFO: Got endpoints: latency-svc-snwl4 [1.897078541s] May 12 07:51:15.418: INFO: Created: latency-svc-6pcdm May 12 07:51:15.421: INFO: Got endpoints: latency-svc-6pcdm [2.061757733s] May 12 07:51:15.514: INFO: Created: latency-svc-jwkjc May 12 07:51:15.596: INFO: Got endpoints: latency-svc-jwkjc [2.195553382s] May 12 07:51:15.598: INFO: Created: latency-svc-f2rhb May 12 07:51:15.618: INFO: Got endpoints: latency-svc-f2rhb [2.181166266s] May 12 07:51:15.831: INFO: Created: latency-svc-grnkn May 12 07:51:15.835: INFO: Got endpoints: latency-svc-grnkn [2.247466706s] May 12 07:51:15.986: INFO: Created: latency-svc-f2m6q May 12 07:51:16.032: INFO: Got endpoints: latency-svc-f2m6q [2.348979056s] May 12 07:51:16.178: INFO: Created: latency-svc-5g2dn May 12 07:51:16.189: INFO: Got endpoints: latency-svc-5g2dn [2.402592621s] May 12 07:51:17.041: INFO: Created: latency-svc-r79rp May 12 07:51:17.505: INFO: Got endpoints: latency-svc-r79rp [3.436943653s] May 12 07:51:17.969: INFO: Created: latency-svc-tskjn May 12 07:51:18.153: INFO: Got endpoints: latency-svc-tskjn [3.958489067s] May 12 07:51:18.735: INFO: Created: latency-svc-6kgts May 12 07:51:19.208: INFO: Got endpoints: latency-svc-6kgts [4.965721003s] May 12 07:51:19.400: INFO: Created: latency-svc-6226p May 12 07:51:19.467: INFO: Got endpoints: latency-svc-6226p [5.080034897s] May 12 07:51:19.626: INFO: Created: latency-svc-nmqtv May 12 07:51:19.668: INFO: Got endpoints: latency-svc-nmqtv [5.209360359s] May 12 07:51:19.896: INFO: Created: latency-svc-d9z6g May 12 07:51:19.956: INFO: Got endpoints: latency-svc-d9z6g [5.324091388s] May 12 07:51:20.234: INFO: Created: latency-svc-49m8k May 12 07:51:20.605: INFO: Got endpoints: latency-svc-49m8k [5.787157146s] May 12 07:51:20.849: INFO: Created: latency-svc-bk84w May 12 07:51:20.861: INFO: Got endpoints: latency-svc-bk84w [5.839713232s] May 12 07:51:20.940: INFO: Created: latency-svc-m7m8w May 12 07:51:21.021: INFO: Got endpoints: latency-svc-m7m8w [5.814062565s] May 12 07:51:21.055: INFO: Created: latency-svc-r9vm5 May 12 07:51:21.442: INFO: Created: 
latency-svc-4hw9n May 12 07:51:21.506: INFO: Created: latency-svc-ln2zr May 12 07:51:21.507: INFO: Got endpoints: latency-svc-r9vm5 [6.085057868s] May 12 07:51:21.535: INFO: Got endpoints: latency-svc-ln2zr [5.916221817s] May 12 07:51:21.595: INFO: Got endpoints: latency-svc-4hw9n [5.999390911s] May 12 07:51:21.596: INFO: Created: latency-svc-5hpqb May 12 07:51:21.728: INFO: Got endpoints: latency-svc-5hpqb [5.893305429s] May 12 07:51:21.732: INFO: Created: latency-svc-qg8gz May 12 07:51:21.737: INFO: Got endpoints: latency-svc-qg8gz [5.704630008s] May 12 07:51:21.782: INFO: Created: latency-svc-mw6tj May 12 07:51:21.798: INFO: Got endpoints: latency-svc-mw6tj [5.609513064s] May 12 07:51:21.896: INFO: Created: latency-svc-bvcbx May 12 07:51:21.936: INFO: Got endpoints: latency-svc-bvcbx [4.430494562s] May 12 07:51:21.992: INFO: Created: latency-svc-l7xfz May 12 07:51:22.070: INFO: Got endpoints: latency-svc-l7xfz [3.917015796s] May 12 07:51:22.112: INFO: Created: latency-svc-s9kvb May 12 07:51:22.122: INFO: Got endpoints: latency-svc-s9kvb [2.914166931s] May 12 07:51:22.144: INFO: Created: latency-svc-ch27b May 12 07:51:22.153: INFO: Got endpoints: latency-svc-ch27b [2.685344032s] May 12 07:51:22.238: INFO: Created: latency-svc-rsjg6 May 12 07:51:22.249: INFO: Got endpoints: latency-svc-rsjg6 [2.580859544s] May 12 07:51:22.503: INFO: Created: latency-svc-w6xj7 May 12 07:51:22.519: INFO: Got endpoints: latency-svc-w6xj7 [2.562947408s] May 12 07:51:23.110: INFO: Created: latency-svc-xjxv6 May 12 07:51:23.483: INFO: Got endpoints: latency-svc-xjxv6 [2.877942044s] May 12 07:51:23.627: INFO: Created: latency-svc-qrrz4 May 12 07:51:23.639: INFO: Got endpoints: latency-svc-qrrz4 [2.777987602s] May 12 07:51:23.702: INFO: Created: latency-svc-4rt52 May 12 07:51:24.184: INFO: Got endpoints: latency-svc-4rt52 [3.162130156s] May 12 07:51:24.192: INFO: Created: latency-svc-294jn May 12 07:51:24.215: INFO: Got endpoints: latency-svc-294jn [2.708278277s] May 12 07:51:24.556: INFO: Created: latency-svc-5wg8r May 12 07:51:24.772: INFO: Got endpoints: latency-svc-5wg8r [3.23708701s] May 12 07:51:25.202: INFO: Created: latency-svc-cgs2c May 12 07:51:25.304: INFO: Got endpoints: latency-svc-cgs2c [3.709008397s] May 12 07:51:25.435: INFO: Created: latency-svc-6lvxt May 12 07:51:25.499: INFO: Got endpoints: latency-svc-6lvxt [3.770217746s] May 12 07:51:25.499: INFO: Created: latency-svc-rpsns May 12 07:51:25.734: INFO: Got endpoints: latency-svc-rpsns [3.997078228s] May 12 07:51:25.805: INFO: Created: latency-svc-btjht May 12 07:51:26.076: INFO: Got endpoints: latency-svc-btjht [4.277424993s] May 12 07:51:26.260: INFO: Created: latency-svc-88h6z May 12 07:51:26.260: INFO: Got endpoints: latency-svc-88h6z [4.323913976s] May 12 07:51:26.393: INFO: Created: latency-svc-kxg5g May 12 07:51:26.397: INFO: Got endpoints: latency-svc-kxg5g [4.326937286s] May 12 07:51:26.454: INFO: Created: latency-svc-bxk64 May 12 07:51:26.457: INFO: Got endpoints: latency-svc-bxk64 [4.335368403s] May 12 07:51:26.585: INFO: Created: latency-svc-zx5h9 May 12 07:51:26.646: INFO: Got endpoints: latency-svc-zx5h9 [4.492986279s] May 12 07:51:26.646: INFO: Created: latency-svc-pnnmn May 12 07:51:26.681: INFO: Got endpoints: latency-svc-pnnmn [4.431763981s] May 12 07:51:26.741: INFO: Created: latency-svc-rb42g May 12 07:51:26.746: INFO: Got endpoints: latency-svc-rb42g [4.226633407s] May 12 07:51:26.784: INFO: Created: latency-svc-xr4q2 May 12 07:51:26.794: INFO: Got endpoints: latency-svc-xr4q2 [3.311263653s] May 12 07:51:26.814: INFO: 
Created: latency-svc-wj4cf May 12 07:51:26.875: INFO: Got endpoints: latency-svc-wj4cf [3.235328702s] May 12 07:51:26.909: INFO: Created: latency-svc-jmffw May 12 07:51:26.915: INFO: Got endpoints: latency-svc-jmffw [2.731198516s] May 12 07:51:26.934: INFO: Created: latency-svc-7xbf8 May 12 07:51:26.939: INFO: Got endpoints: latency-svc-7xbf8 [2.724099689s] May 12 07:51:26.958: INFO: Created: latency-svc-hrbqb May 12 07:51:27.003: INFO: Got endpoints: latency-svc-hrbqb [2.231721934s] May 12 07:51:27.030: INFO: Created: latency-svc-mzjqt May 12 07:51:27.066: INFO: Got endpoints: latency-svc-mzjqt [1.761588052s] May 12 07:51:27.083: INFO: Created: latency-svc-7bvql May 12 07:51:27.154: INFO: Got endpoints: latency-svc-7bvql [1.655185646s] May 12 07:51:27.174: INFO: Created: latency-svc-pcphm May 12 07:51:27.187: INFO: Got endpoints: latency-svc-pcphm [1.452723113s] May 12 07:51:27.204: INFO: Created: latency-svc-tcfg5 May 12 07:51:27.218: INFO: Got endpoints: latency-svc-tcfg5 [1.141602106s] May 12 07:51:27.233: INFO: Created: latency-svc-6qssp May 12 07:51:27.248: INFO: Got endpoints: latency-svc-6qssp [987.487749ms] May 12 07:51:27.293: INFO: Created: latency-svc-fj2jq May 12 07:51:27.294: INFO: Got endpoints: latency-svc-fj2jq [896.561896ms] May 12 07:51:27.317: INFO: Created: latency-svc-m5t6w May 12 07:51:27.326: INFO: Got endpoints: latency-svc-m5t6w [868.939939ms] May 12 07:51:27.354: INFO: Created: latency-svc-ptlrk May 12 07:51:27.382: INFO: Got endpoints: latency-svc-ptlrk [735.612589ms] May 12 07:51:27.471: INFO: Created: latency-svc-phx77 May 12 07:51:27.473: INFO: Got endpoints: latency-svc-phx77 [792.376215ms] May 12 07:51:27.509: INFO: Created: latency-svc-bzz2j May 12 07:51:27.520: INFO: Got endpoints: latency-svc-bzz2j [774.126824ms] May 12 07:51:27.563: INFO: Created: latency-svc-tfcnb May 12 07:51:27.614: INFO: Got endpoints: latency-svc-tfcnb [819.813265ms] May 12 07:51:27.648: INFO: Created: latency-svc-s2ggl May 12 07:51:27.658: INFO: Got endpoints: latency-svc-s2ggl [783.172643ms] May 12 07:51:27.684: INFO: Created: latency-svc-8t68v May 12 07:51:27.694: INFO: Got endpoints: latency-svc-8t68v [779.423161ms] May 12 07:51:27.714: INFO: Created: latency-svc-kk282 May 12 07:51:27.758: INFO: Got endpoints: latency-svc-kk282 [818.666173ms] May 12 07:51:27.761: INFO: Created: latency-svc-zhnzr May 12 07:51:27.784: INFO: Got endpoints: latency-svc-zhnzr [780.781032ms] May 12 07:51:27.858: INFO: Created: latency-svc-hcxvp May 12 07:51:27.974: INFO: Got endpoints: latency-svc-hcxvp [907.470529ms] May 12 07:51:28.001: INFO: Created: latency-svc-xlrkk May 12 07:51:28.038: INFO: Got endpoints: latency-svc-xlrkk [883.877381ms] May 12 07:51:28.170: INFO: Created: latency-svc-jdh92 May 12 07:51:28.175: INFO: Got endpoints: latency-svc-jdh92 [987.998637ms] May 12 07:51:28.211: INFO: Created: latency-svc-j42qh May 12 07:51:28.218: INFO: Got endpoints: latency-svc-j42qh [1.0004435s] May 12 07:51:28.315: INFO: Created: latency-svc-xc25k May 12 07:51:28.325: INFO: Got endpoints: latency-svc-xc25k [1.077205399s] May 12 07:51:28.356: INFO: Created: latency-svc-fqjrq May 12 07:51:28.380: INFO: Got endpoints: latency-svc-fqjrq [1.086420267s] May 12 07:51:28.399: INFO: Created: latency-svc-sw2z6 May 12 07:51:28.410: INFO: Got endpoints: latency-svc-sw2z6 [1.083476767s] May 12 07:51:28.466: INFO: Created: latency-svc-f6hbr May 12 07:51:28.487: INFO: Got endpoints: latency-svc-f6hbr [1.105736574s] May 12 07:51:28.524: INFO: Created: latency-svc-d2pkz May 12 07:51:28.538: INFO: Got endpoints: 
latency-svc-d2pkz [1.064337431s] May 12 07:51:28.602: INFO: Created: latency-svc-m42jw May 12 07:51:28.605: INFO: Got endpoints: latency-svc-m42jw [1.085047002s] May 12 07:51:28.632: INFO: Created: latency-svc-nn6tg May 12 07:51:28.645: INFO: Got endpoints: latency-svc-nn6tg [1.03111627s] May 12 07:51:28.674: INFO: Created: latency-svc-966tj May 12 07:51:28.688: INFO: Got endpoints: latency-svc-966tj [1.029602813s] May 12 07:51:29.070: INFO: Created: latency-svc-jzp7q May 12 07:51:29.172: INFO: Got endpoints: latency-svc-jzp7q [1.477654498s] May 12 07:51:29.227: INFO: Created: latency-svc-spqx2 May 12 07:51:29.251: INFO: Got endpoints: latency-svc-spqx2 [1.493547047s] May 12 07:51:29.417: INFO: Created: latency-svc-f4btp May 12 07:51:29.419: INFO: Got endpoints: latency-svc-f4btp [1.635116786s] May 12 07:51:29.870: INFO: Created: latency-svc-99gm2 May 12 07:51:30.238: INFO: Got endpoints: latency-svc-99gm2 [2.264097266s] May 12 07:51:30.245: INFO: Created: latency-svc-lw8fg May 12 07:51:30.302: INFO: Got endpoints: latency-svc-lw8fg [2.264608536s] May 12 07:51:30.507: INFO: Created: latency-svc-hdn44 May 12 07:51:30.557: INFO: Got endpoints: latency-svc-hdn44 [2.381784065s] May 12 07:51:30.924: INFO: Created: latency-svc-mkcc7 May 12 07:51:30.978: INFO: Got endpoints: latency-svc-mkcc7 [2.76025951s] May 12 07:51:31.033: INFO: Created: latency-svc-4x5sp May 12 07:51:31.051: INFO: Got endpoints: latency-svc-4x5sp [2.726312188s] May 12 07:51:31.314: INFO: Created: latency-svc-szg26 May 12 07:51:31.429: INFO: Got endpoints: latency-svc-szg26 [3.048274617s] May 12 07:51:31.434: INFO: Created: latency-svc-r2s5j May 12 07:51:31.458: INFO: Got endpoints: latency-svc-r2s5j [3.048027433s] May 12 07:51:31.529: INFO: Created: latency-svc-czvzn May 12 07:51:31.679: INFO: Got endpoints: latency-svc-czvzn [3.191920406s] May 12 07:51:31.682: INFO: Created: latency-svc-gpvrg May 12 07:51:31.747: INFO: Got endpoints: latency-svc-gpvrg [3.209045227s] May 12 07:51:31.783: INFO: Created: latency-svc-8nrr8 May 12 07:51:31.795: INFO: Got endpoints: latency-svc-8nrr8 [3.189634033s] May 12 07:51:31.902: INFO: Created: latency-svc-nmqh8 May 12 07:51:31.906: INFO: Got endpoints: latency-svc-nmqh8 [3.260226696s] May 12 07:51:31.968: INFO: Created: latency-svc-b5k5p May 12 07:51:32.075: INFO: Got endpoints: latency-svc-b5k5p [3.387889188s] May 12 07:51:32.077: INFO: Created: latency-svc-jn6zn May 12 07:51:32.083: INFO: Got endpoints: latency-svc-jn6zn [2.911003345s] May 12 07:51:32.167: INFO: Created: latency-svc-rr2w8 May 12 07:51:32.257: INFO: Got endpoints: latency-svc-rr2w8 [3.005708094s] May 12 07:51:32.292: INFO: Created: latency-svc-mbx57 May 12 07:51:32.324: INFO: Got endpoints: latency-svc-mbx57 [2.904044549s] May 12 07:51:32.501: INFO: Created: latency-svc-whxjj May 12 07:51:32.598: INFO: Got endpoints: latency-svc-whxjj [2.359635778s] May 12 07:51:32.699: INFO: Created: latency-svc-k5zgb May 12 07:51:32.732: INFO: Got endpoints: latency-svc-k5zgb [2.429071671s] May 12 07:51:32.766: INFO: Created: latency-svc-7kh5d May 12 07:51:32.896: INFO: Got endpoints: latency-svc-7kh5d [2.338769891s] May 12 07:51:32.939: INFO: Created: latency-svc-j586f May 12 07:51:32.942: INFO: Got endpoints: latency-svc-j586f [1.964021659s] May 12 07:51:33.226: INFO: Created: latency-svc-58bvh May 12 07:51:33.228: INFO: Got endpoints: latency-svc-58bvh [2.176959911s] May 12 07:51:33.799: INFO: Created: latency-svc-4bzqh May 12 07:51:33.865: INFO: Got endpoints: latency-svc-4bzqh [2.436424888s] May 12 07:51:34.063: INFO: Created: 
latency-svc-4vpqx May 12 07:51:34.093: INFO: Got endpoints: latency-svc-4vpqx [2.635281006s] May 12 07:51:34.761: INFO: Created: latency-svc-5xgm2 May 12 07:51:35.549: INFO: Created: latency-svc-bs98q May 12 07:51:35.550: INFO: Got endpoints: latency-svc-5xgm2 [3.870409011s] May 12 07:51:35.675: INFO: Got endpoints: latency-svc-bs98q [3.928539626s] May 12 07:51:35.903: INFO: Created: latency-svc-qwsfs May 12 07:51:35.945: INFO: Got endpoints: latency-svc-qwsfs [4.150069233s] May 12 07:51:36.808: INFO: Created: latency-svc-9pvpd May 12 07:51:37.357: INFO: Got endpoints: latency-svc-9pvpd [5.451442317s] May 12 07:51:37.449: INFO: Created: latency-svc-zcbhp May 12 07:51:37.971: INFO: Got endpoints: latency-svc-zcbhp [5.895872856s] May 12 07:51:38.563: INFO: Created: latency-svc-vrg4p May 12 07:51:38.818: INFO: Got endpoints: latency-svc-vrg4p [6.735240906s] May 12 07:51:40.026: INFO: Created: latency-svc-ddtbw May 12 07:51:40.796: INFO: Got endpoints: latency-svc-ddtbw [8.538487074s] May 12 07:51:41.597: INFO: Created: latency-svc-gwhsr May 12 07:51:42.101: INFO: Got endpoints: latency-svc-gwhsr [9.777517059s] May 12 07:51:43.280: INFO: Created: latency-svc-jw2vv May 12 07:51:43.996: INFO: Created: latency-svc-pwdcd May 12 07:51:44.536: INFO: Got endpoints: latency-svc-pwdcd [11.804567785s] May 12 07:51:44.537: INFO: Got endpoints: latency-svc-jw2vv [11.939578405s] May 12 07:51:45.376: INFO: Created: latency-svc-hc5jl May 12 07:51:45.464: INFO: Got endpoints: latency-svc-hc5jl [12.567919795s] May 12 07:51:46.019: INFO: Created: latency-svc-x9pgv May 12 07:51:46.262: INFO: Got endpoints: latency-svc-x9pgv [13.319646466s] May 12 07:51:46.547: INFO: Created: latency-svc-fjh2t May 12 07:51:46.812: INFO: Got endpoints: latency-svc-fjh2t [13.584182968s] May 12 07:51:46.816: INFO: Created: latency-svc-cbd6g May 12 07:51:47.154: INFO: Got endpoints: latency-svc-cbd6g [13.28869244s] May 12 07:51:47.529: INFO: Created: latency-svc-cd4kx May 12 07:51:47.986: INFO: Got endpoints: latency-svc-cd4kx [13.8924645s] May 12 07:51:48.343: INFO: Created: latency-svc-sxtpl May 12 07:51:48.860: INFO: Got endpoints: latency-svc-sxtpl [13.310487309s] May 12 07:51:49.226: INFO: Created: latency-svc-l8d86 May 12 07:51:49.238: INFO: Got endpoints: latency-svc-l8d86 [13.562779536s] May 12 07:51:49.466: INFO: Created: latency-svc-2kjrv May 12 07:51:49.501: INFO: Got endpoints: latency-svc-2kjrv [13.55606473s] May 12 07:51:49.562: INFO: Created: latency-svc-4w2h4 May 12 07:51:49.574: INFO: Got endpoints: latency-svc-4w2h4 [12.216480901s] May 12 07:51:49.622: INFO: Created: latency-svc-k5rpk May 12 07:51:49.716: INFO: Got endpoints: latency-svc-k5rpk [11.744767808s] May 12 07:51:49.756: INFO: Created: latency-svc-lp2hx May 12 07:51:49.772: INFO: Got endpoints: latency-svc-lp2hx [10.953499567s] May 12 07:51:49.797: INFO: Created: latency-svc-zjpq2 May 12 07:51:49.809: INFO: Got endpoints: latency-svc-zjpq2 [9.013658187s] May 12 07:51:49.872: INFO: Created: latency-svc-cg57f May 12 07:51:49.881: INFO: Got endpoints: latency-svc-cg57f [7.779914015s] May 12 07:51:49.915: INFO: Created: latency-svc-dqp9p May 12 07:51:49.929: INFO: Got endpoints: latency-svc-dqp9p [5.392741044s] May 12 07:51:49.947: INFO: Created: latency-svc-hfhz2 May 12 07:51:49.959: INFO: Got endpoints: latency-svc-hfhz2 [5.421679805s] May 12 07:51:50.076: INFO: Created: latency-svc-h6x4d May 12 07:51:50.079: INFO: Got endpoints: latency-svc-h6x4d [4.615336658s] May 12 07:51:50.572: INFO: Created: latency-svc-cdwpd May 12 07:51:50.577: INFO: Got endpoints: 
latency-svc-cdwpd [4.31505393s] May 12 07:51:50.612: INFO: Created: latency-svc-gbxn8 May 12 07:51:50.621: INFO: Got endpoints: latency-svc-gbxn8 [3.808102096s] May 12 07:51:50.681: INFO: Created: latency-svc-npvkt May 12 07:51:50.684: INFO: Got endpoints: latency-svc-npvkt [3.530018284s] May 12 07:51:50.739: INFO: Created: latency-svc-vcf47 May 12 07:51:50.752: INFO: Got endpoints: latency-svc-vcf47 [2.765695567s] May 12 07:51:50.920: INFO: Created: latency-svc-tsw2t May 12 07:51:50.930: INFO: Got endpoints: latency-svc-tsw2t [2.069847803s] May 12 07:51:51.095: INFO: Created: latency-svc-jcdt8 May 12 07:51:51.096: INFO: Got endpoints: latency-svc-jcdt8 [1.857821812s] May 12 07:51:51.159: INFO: Created: latency-svc-7gsxn May 12 07:51:51.166: INFO: Got endpoints: latency-svc-7gsxn [1.6648715s] May 12 07:51:51.286: INFO: Created: latency-svc-fpdrd May 12 07:51:51.304: INFO: Got endpoints: latency-svc-fpdrd [1.730004788s] May 12 07:51:51.320: INFO: Created: latency-svc-2lhhw May 12 07:51:51.347: INFO: Got endpoints: latency-svc-2lhhw [1.630955321s] May 12 07:51:51.430: INFO: Created: latency-svc-c9xxv May 12 07:51:51.432: INFO: Got endpoints: latency-svc-c9xxv [1.660576543s] May 12 07:51:51.469: INFO: Created: latency-svc-dqz4p May 12 07:51:51.499: INFO: Got endpoints: latency-svc-dqz4p [1.689662732s] May 12 07:51:51.586: INFO: Created: latency-svc-2cg88 May 12 07:51:51.589: INFO: Got endpoints: latency-svc-2cg88 [1.708066975s] May 12 07:51:51.644: INFO: Created: latency-svc-4whlq May 12 07:51:51.661: INFO: Got endpoints: latency-svc-4whlq [1.731942155s] May 12 07:51:51.723: INFO: Created: latency-svc-zd46m May 12 07:51:51.732: INFO: Got endpoints: latency-svc-zd46m [1.77273536s] May 12 07:51:51.766: INFO: Created: latency-svc-vn85b May 12 07:51:51.780: INFO: Got endpoints: latency-svc-vn85b [1.700539033s] May 12 07:51:51.808: INFO: Created: latency-svc-8vmsz May 12 07:51:51.822: INFO: Got endpoints: latency-svc-8vmsz [1.244945831s] May 12 07:51:51.881: INFO: Created: latency-svc-9q4sj May 12 07:51:51.888: INFO: Got endpoints: latency-svc-9q4sj [1.267121466s] May 12 07:51:51.921: INFO: Created: latency-svc-dgd42 May 12 07:51:51.930: INFO: Got endpoints: latency-svc-dgd42 [1.246137252s] May 12 07:51:51.951: INFO: Created: latency-svc-b5xx5 May 12 07:51:51.967: INFO: Got endpoints: latency-svc-b5xx5 [1.215364692s] May 12 07:51:52.035: INFO: Created: latency-svc-vw4mc May 12 07:51:52.037: INFO: Got endpoints: latency-svc-vw4mc [1.106815088s] May 12 07:51:52.078: INFO: Created: latency-svc-lfhvx May 12 07:51:52.088: INFO: Got endpoints: latency-svc-lfhvx [991.628284ms] May 12 07:51:52.107: INFO: Created: latency-svc-6fl7l May 12 07:51:52.118: INFO: Got endpoints: latency-svc-6fl7l [951.443125ms] May 12 07:51:52.184: INFO: Created: latency-svc-fmvkt May 12 07:51:52.203: INFO: Got endpoints: latency-svc-fmvkt [898.664612ms] May 12 07:51:52.264: INFO: Created: latency-svc-z7dbr May 12 07:51:52.369: INFO: Got endpoints: latency-svc-z7dbr [1.021892636s] May 12 07:51:52.400: INFO: Created: latency-svc-jdsqk May 12 07:51:52.425: INFO: Got endpoints: latency-svc-jdsqk [992.721921ms] May 12 07:51:52.550: INFO: Created: latency-svc-wvdwk May 12 07:51:52.593: INFO: Got endpoints: latency-svc-wvdwk [1.09429044s] May 12 07:51:52.622: INFO: Created: latency-svc-pn5wn May 12 07:51:52.635: INFO: Got endpoints: latency-svc-pn5wn [1.045873416s] May 12 07:51:52.711: INFO: Created: latency-svc-ttwn6 May 12 07:51:52.719: INFO: Got endpoints: latency-svc-ttwn6 [1.057846965s] May 12 07:51:52.750: INFO: Created: 
latency-svc-znflp May 12 07:51:52.763: INFO: Got endpoints: latency-svc-znflp [1.031186607s] May 12 07:51:52.797: INFO: Created: latency-svc-lh6m7 May 12 07:51:52.932: INFO: Got endpoints: latency-svc-lh6m7 [1.152167263s] May 12 07:51:52.935: INFO: Created: latency-svc-wdgjc May 12 07:51:52.968: INFO: Got endpoints: latency-svc-wdgjc [1.145175872s] May 12 07:51:53.027: INFO: Created: latency-svc-gcclb May 12 07:51:53.238: INFO: Got endpoints: latency-svc-gcclb [1.350096211s] May 12 07:51:53.544: INFO: Created: latency-svc-cqxn8 May 12 07:51:53.869: INFO: Got endpoints: latency-svc-cqxn8 [1.938836681s] May 12 07:51:53.872: INFO: Created: latency-svc-j8rs4 May 12 07:51:53.952: INFO: Got endpoints: latency-svc-j8rs4 [1.984606235s] May 12 07:51:54.166: INFO: Created: latency-svc-xlgld May 12 07:51:54.208: INFO: Created: latency-svc-cfjdp May 12 07:51:54.258: INFO: Got endpoints: latency-svc-cfjdp [2.169808427s] May 12 07:51:54.258: INFO: Got endpoints: latency-svc-xlgld [2.220391498s] May 12 07:51:54.364: INFO: Created: latency-svc-sjkhd May 12 07:51:54.371: INFO: Got endpoints: latency-svc-sjkhd [2.253684256s] May 12 07:51:54.371: INFO: Latencies: [176.219053ms 332.896839ms 553.389835ms 728.236287ms 735.612589ms 774.126824ms 779.423161ms 780.781032ms 783.172643ms 783.836693ms 792.376215ms 818.666173ms 819.813265ms 820.314536ms 868.939939ms 883.877381ms 892.668006ms 896.561896ms 898.664612ms 907.470529ms 951.443125ms 976.233211ms 987.487749ms 987.998637ms 991.628284ms 992.721921ms 1.0004435s 1.021892636s 1.029602813s 1.03111627s 1.031186607s 1.045873416s 1.057846965s 1.064337431s 1.077205399s 1.078222026s 1.083476767s 1.085047002s 1.086420267s 1.09429044s 1.105736574s 1.106815088s 1.129892086s 1.139403089s 1.141602106s 1.145175872s 1.152167263s 1.204581389s 1.215364692s 1.218268082s 1.219997892s 1.238657674s 1.244945831s 1.246137252s 1.26345538s 1.264453288s 1.267121466s 1.291182029s 1.306370282s 1.317614993s 1.320471758s 1.321712047s 1.324527361s 1.333175107s 1.350096211s 1.362883672s 1.365983448s 1.420061186s 1.436035742s 1.446839627s 1.452723113s 1.472583057s 1.473104757s 1.477654498s 1.479936873s 1.493547047s 1.556158338s 1.560346524s 1.579415925s 1.592058421s 1.630955321s 1.635116786s 1.655185646s 1.660576543s 1.6648715s 1.689662732s 1.700539033s 1.708066975s 1.730004788s 1.731942155s 1.761588052s 1.765512404s 1.77273536s 1.857821812s 1.897078541s 1.938836681s 1.964021659s 1.984606235s 2.061757733s 2.069847803s 2.169808427s 2.176959911s 2.181166266s 2.195553382s 2.220391498s 2.231721934s 2.247466706s 2.253684256s 2.264097266s 2.264608536s 2.338769891s 2.348979056s 2.359635778s 2.381784065s 2.402592621s 2.429071671s 2.436424888s 2.562947408s 2.580859544s 2.635281006s 2.685344032s 2.708278277s 2.724099689s 2.726312188s 2.731198516s 2.76025951s 2.765695567s 2.777987602s 2.877942044s 2.904044549s 2.911003345s 2.914166931s 3.005708094s 3.048027433s 3.048274617s 3.162130156s 3.189634033s 3.191920406s 3.209045227s 3.235328702s 3.23708701s 3.260226696s 3.311263653s 3.387889188s 3.436943653s 3.530018284s 3.709008397s 3.770217746s 3.808102096s 3.870409011s 3.917015796s 3.928539626s 3.958489067s 3.997078228s 4.150069233s 4.226633407s 4.277424993s 4.31505393s 4.323913976s 4.326937286s 4.335368403s 4.430494562s 4.431763981s 4.492986279s 4.615336658s 4.965721003s 5.080034897s 5.209360359s 5.324091388s 5.392741044s 5.421679805s 5.451442317s 5.609513064s 5.704630008s 5.787157146s 5.814062565s 5.839713232s 5.893305429s 5.895872856s 5.916221817s 5.999390911s 6.085057868s 6.735240906s 7.779914015s 
8.538487074s 9.013658187s 9.777517059s 10.953499567s 11.744767808s 11.804567785s 11.939578405s 12.216480901s 12.567919795s 13.28869244s 13.310487309s 13.319646466s 13.55606473s 13.562779536s 13.584182968s 13.8924645s] May 12 07:51:54.372: INFO: 50 %ile: 2.169808427s May 12 07:51:54.372: INFO: 90 %ile: 5.999390911s May 12 07:51:54.372: INFO: 99 %ile: 13.584182968s May 12 07:51:54.372: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:51:54.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-cx2fk" for this suite. May 12 07:53:00.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:53:00.447: INFO: namespace: e2e-tests-svc-latency-cx2fk, resource: bindings, ignored listing per whitelist May 12 07:53:00.471: INFO: namespace e2e-tests-svc-latency-cx2fk deletion completed in 1m6.074365877s • [SLOW TEST:116.364 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:53:00.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults May 12 07:53:00.715: INFO: Waiting up to 5m0s for pod "client-containers-9a02399c-9425-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-containers-xfflx" to be "success or failure" May 12 07:53:00.731: INFO: Pod "client-containers-9a02399c-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.723093ms May 12 07:53:03.121: INFO: Pod "client-containers-9a02399c-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.405895341s May 12 07:53:05.126: INFO: Pod "client-containers-9a02399c-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.410069179s May 12 07:53:07.129: INFO: Pod "client-containers-9a02399c-9425-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.413497514s STEP: Saw pod success May 12 07:53:07.129: INFO: Pod "client-containers-9a02399c-9425-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:53:07.131: INFO: Trying to get logs from node hunter-worker pod client-containers-9a02399c-9425-11ea-bb6f-0242ac11001c container test-container: STEP: delete the pod May 12 07:53:07.470: INFO: Waiting for pod client-containers-9a02399c-9425-11ea-bb6f-0242ac11001c to disappear May 12 07:53:07.676: INFO: Pod client-containers-9a02399c-9425-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:53:07.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-xfflx" for this suite. May 12 07:53:13.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:53:13.806: INFO: namespace: e2e-tests-containers-xfflx, resource: bindings, ignored listing per whitelist May 12 07:53:13.822: INFO: namespace e2e-tests-containers-xfflx deletion completed in 6.141457708s • [SLOW TEST:13.351 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:53:13.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 07:53:14.125: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a205c7df-9425-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-downward-api-mc7tz" to be "success or failure" May 12 07:53:14.143: INFO: Pod "downwardapi-volume-a205c7df-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.921311ms May 12 07:53:16.155: INFO: Pod "downwardapi-volume-a205c7df-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030219823s May 12 07:53:18.162: INFO: Pod "downwardapi-volume-a205c7df-9425-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.037372854s May 12 07:53:20.197: INFO: Pod "downwardapi-volume-a205c7df-9425-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.072307333s STEP: Saw pod success May 12 07:53:20.197: INFO: Pod "downwardapi-volume-a205c7df-9425-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:53:20.199: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a205c7df-9425-11ea-bb6f-0242ac11001c container client-container: STEP: delete the pod May 12 07:53:20.256: INFO: Waiting for pod downwardapi-volume-a205c7df-9425-11ea-bb6f-0242ac11001c to disappear May 12 07:53:20.268: INFO: Pod downwardapi-volume-a205c7df-9425-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:53:20.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-mc7tz" for this suite. May 12 07:53:26.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:53:26.308: INFO: namespace: e2e-tests-downward-api-mc7tz, resource: bindings, ignored listing per whitelist May 12 07:53:26.352: INFO: namespace e2e-tests-downward-api-mc7tz deletion completed in 6.078998585s • [SLOW TEST:12.530 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:53:26.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-a96b4c95-9425-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 07:53:26.575: INFO: Waiting up to 5m0s for pod "pod-configmaps-a96d1b7d-9425-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-configmap-ff6p4" to be "success or failure" May 12 07:53:26.611: INFO: Pod "pod-configmaps-a96d1b7d-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 35.578566ms May 12 07:53:28.839: INFO: Pod "pod-configmaps-a96d1b7d-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263953509s May 12 07:53:30.845: INFO: Pod "pod-configmaps-a96d1b7d-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.269533767s May 12 07:53:32.848: INFO: Pod "pod-configmaps-a96d1b7d-9425-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.273279504s May 12 07:53:34.852: INFO: Pod "pod-configmaps-a96d1b7d-9425-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.276608293s STEP: Saw pod success May 12 07:53:34.852: INFO: Pod "pod-configmaps-a96d1b7d-9425-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:53:34.854: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-a96d1b7d-9425-11ea-bb6f-0242ac11001c container configmap-volume-test: STEP: delete the pod May 12 07:53:34.965: INFO: Waiting for pod pod-configmaps-a96d1b7d-9425-11ea-bb6f-0242ac11001c to disappear May 12 07:53:35.033: INFO: Pod pod-configmaps-a96d1b7d-9425-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:53:35.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-ff6p4" for this suite. May 12 07:53:45.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:53:45.213: INFO: namespace: e2e-tests-configmap-ff6p4, resource: bindings, ignored listing per whitelist May 12 07:53:45.223: INFO: namespace e2e-tests-configmap-ff6p4 deletion completed in 10.096339233s • [SLOW TEST:18.872 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:53:45.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 12 07:53:46.866: INFO: Waiting up to 5m0s for pod "downward-api-b56b5510-9425-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-downward-api-sc8cv" to be "success or failure" May 12 07:53:47.546: INFO: Pod "downward-api-b56b5510-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 679.47651ms May 12 07:53:49.588: INFO: Pod "downward-api-b56b5510-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.721619935s May 12 07:53:51.719: INFO: Pod "downward-api-b56b5510-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.853112702s May 12 07:53:53.722: INFO: Pod "downward-api-b56b5510-9425-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.856316639s May 12 07:53:55.726: INFO: Pod "downward-api-b56b5510-9425-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.859455088s STEP: Saw pod success May 12 07:53:55.726: INFO: Pod "downward-api-b56b5510-9425-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:53:55.727: INFO: Trying to get logs from node hunter-worker pod downward-api-b56b5510-9425-11ea-bb6f-0242ac11001c container dapi-container: STEP: delete the pod May 12 07:53:55.745: INFO: Waiting for pod downward-api-b56b5510-9425-11ea-bb6f-0242ac11001c to disappear May 12 07:53:55.760: INFO: Pod downward-api-b56b5510-9425-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:53:55.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-sc8cv" for this suite. May 12 07:54:01.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:54:01.835: INFO: namespace: e2e-tests-downward-api-sc8cv, resource: bindings, ignored listing per whitelist May 12 07:54:01.869: INFO: namespace e2e-tests-downward-api-sc8cv deletion completed in 6.105983835s • [SLOW TEST:16.645 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:54:01.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-be8c7661-9425-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume secrets May 12 07:54:01.998: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-be8e134b-9425-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-79lhc" to be "success or failure" May 12 07:54:02.002: INFO: Pod "pod-projected-secrets-be8e134b-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.241008ms May 12 07:54:04.006: INFO: Pod "pod-projected-secrets-be8e134b-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007799782s May 12 07:54:06.011: INFO: Pod "pod-projected-secrets-be8e134b-9425-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.012895885s May 12 07:54:08.031: INFO: Pod "pod-projected-secrets-be8e134b-9425-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.032523796s STEP: Saw pod success May 12 07:54:08.031: INFO: Pod "pod-projected-secrets-be8e134b-9425-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:54:08.034: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-be8e134b-9425-11ea-bb6f-0242ac11001c container projected-secret-volume-test: STEP: delete the pod May 12 07:54:08.052: INFO: Waiting for pod pod-projected-secrets-be8e134b-9425-11ea-bb6f-0242ac11001c to disappear May 12 07:54:08.056: INFO: Pod pod-projected-secrets-be8e134b-9425-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:54:08.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-79lhc" for this suite. May 12 07:54:14.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:54:14.096: INFO: namespace: e2e-tests-projected-79lhc, resource: bindings, ignored listing per whitelist May 12 07:54:14.135: INFO: namespace e2e-tests-projected-79lhc deletion completed in 6.07585486s • [SLOW TEST:12.266 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:54:14.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-zmzwc STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 07:54:14.259: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 12 07:54:42.388: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.115:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-zmzwc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 07:54:42.388: INFO: >>> kubeConfig: /root/.kube/config I0512 07:54:42.411175 6 log.go:172] (0xc0005d8d10) (0xc002058280) Create stream I0512 07:54:42.411202 6 log.go:172] (0xc0005d8d10) (0xc002058280) Stream added, broadcasting: 1 I0512 07:54:42.413864 6 log.go:172] (0xc0005d8d10) Reply frame received for 1 I0512 07:54:42.413899 6 log.go:172] (0xc0005d8d10) (0xc001e34000) Create stream I0512 07:54:42.413911 6 log.go:172] (0xc0005d8d10) (0xc001e34000) Stream added, broadcasting: 3 I0512 07:54:42.414844 6 
log.go:172] (0xc0005d8d10) Reply frame received for 3 I0512 07:54:42.414874 6 log.go:172] (0xc0005d8d10) (0xc002058320) Create stream I0512 07:54:42.414889 6 log.go:172] (0xc0005d8d10) (0xc002058320) Stream added, broadcasting: 5 I0512 07:54:42.415737 6 log.go:172] (0xc0005d8d10) Reply frame received for 5 I0512 07:54:42.474108 6 log.go:172] (0xc0005d8d10) Data frame received for 3 I0512 07:54:42.474162 6 log.go:172] (0xc001e34000) (3) Data frame handling I0512 07:54:42.474195 6 log.go:172] (0xc001e34000) (3) Data frame sent I0512 07:54:42.478602 6 log.go:172] (0xc0005d8d10) Data frame received for 3 I0512 07:54:42.478618 6 log.go:172] (0xc001e34000) (3) Data frame handling I0512 07:54:42.478883 6 log.go:172] (0xc0005d8d10) Data frame received for 5 I0512 07:54:42.478897 6 log.go:172] (0xc002058320) (5) Data frame handling I0512 07:54:42.482457 6 log.go:172] (0xc0005d8d10) Data frame received for 1 I0512 07:54:42.482477 6 log.go:172] (0xc002058280) (1) Data frame handling I0512 07:54:42.482492 6 log.go:172] (0xc002058280) (1) Data frame sent I0512 07:54:42.482505 6 log.go:172] (0xc0005d8d10) (0xc002058280) Stream removed, broadcasting: 1 I0512 07:54:42.482607 6 log.go:172] (0xc0005d8d10) (0xc002058280) Stream removed, broadcasting: 1 I0512 07:54:42.482627 6 log.go:172] (0xc0005d8d10) (0xc001e34000) Stream removed, broadcasting: 3 I0512 07:54:42.482737 6 log.go:172] (0xc0005d8d10) (0xc002058320) Stream removed, broadcasting: 5 May 12 07:54:42.483: INFO: Found all expected endpoints: [netserver-0] May 12 07:54:42.486: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.38:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-zmzwc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 07:54:42.486: INFO: >>> kubeConfig: /root/.kube/config I0512 07:54:42.507852 6 log.go:172] (0xc0011744d0) (0xc001e341e0) Create stream I0512 07:54:42.507884 6 log.go:172] (0xc0011744d0) (0xc001e341e0) Stream added, broadcasting: 1 I0512 07:54:42.510743 6 log.go:172] (0xc0011744d0) Reply frame received for 1 I0512 07:54:42.510787 6 log.go:172] (0xc0011744d0) (0xc001e34280) Create stream I0512 07:54:42.510801 6 log.go:172] (0xc0011744d0) (0xc001e34280) Stream added, broadcasting: 3 I0512 07:54:42.511476 6 log.go:172] (0xc0011744d0) Reply frame received for 3 I0512 07:54:42.511507 6 log.go:172] (0xc0011744d0) (0xc002126000) Create stream I0512 07:54:42.511519 6 log.go:172] (0xc0011744d0) (0xc002126000) Stream added, broadcasting: 5 I0512 07:54:42.512201 6 log.go:172] (0xc0011744d0) Reply frame received for 5 I0512 07:54:42.584356 6 log.go:172] (0xc0011744d0) Data frame received for 5 I0512 07:54:42.584393 6 log.go:172] (0xc002126000) (5) Data frame handling I0512 07:54:42.584420 6 log.go:172] (0xc0011744d0) Data frame received for 3 I0512 07:54:42.584429 6 log.go:172] (0xc001e34280) (3) Data frame handling I0512 07:54:42.584438 6 log.go:172] (0xc001e34280) (3) Data frame sent I0512 07:54:42.584448 6 log.go:172] (0xc0011744d0) Data frame received for 3 I0512 07:54:42.584457 6 log.go:172] (0xc001e34280) (3) Data frame handling I0512 07:54:42.586236 6 log.go:172] (0xc0011744d0) Data frame received for 1 I0512 07:54:42.586267 6 log.go:172] (0xc001e341e0) (1) Data frame handling I0512 07:54:42.586284 6 log.go:172] (0xc001e341e0) (1) Data frame sent I0512 07:54:42.586369 6 log.go:172] (0xc0011744d0) (0xc001e341e0) Stream removed, broadcasting: 1 I0512 07:54:42.586408 6 
log.go:172] (0xc0011744d0) Go away received I0512 07:54:42.586497 6 log.go:172] (0xc0011744d0) (0xc001e341e0) Stream removed, broadcasting: 1 I0512 07:54:42.586522 6 log.go:172] (0xc0011744d0) (0xc001e34280) Stream removed, broadcasting: 3 I0512 07:54:42.586530 6 log.go:172] (0xc0011744d0) (0xc002126000) Stream removed, broadcasting: 5 May 12 07:54:42.586: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:54:42.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-zmzwc" for this suite. May 12 07:55:06.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:55:06.663: INFO: namespace: e2e-tests-pod-network-test-zmzwc, resource: bindings, ignored listing per whitelist May 12 07:55:06.681: INFO: namespace e2e-tests-pod-network-test-zmzwc deletion completed in 24.090823236s • [SLOW TEST:52.546 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:55:06.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command May 12 07:55:07.033: INFO: Waiting up to 5m0s for pod "client-containers-e54ba8b8-9425-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-containers-qx56g" to be "success or failure" May 12 07:55:07.071: INFO: Pod "client-containers-e54ba8b8-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 38.462436ms May 12 07:55:09.146: INFO: Pod "client-containers-e54ba8b8-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11321865s May 12 07:55:11.150: INFO: Pod "client-containers-e54ba8b8-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117533534s May 12 07:55:13.155: INFO: Pod "client-containers-e54ba8b8-9425-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.122076403s STEP: Saw pod success May 12 07:55:13.155: INFO: Pod "client-containers-e54ba8b8-9425-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:55:13.159: INFO: Trying to get logs from node hunter-worker2 pod client-containers-e54ba8b8-9425-11ea-bb6f-0242ac11001c container test-container: STEP: delete the pod May 12 07:55:13.293: INFO: Waiting for pod client-containers-e54ba8b8-9425-11ea-bb6f-0242ac11001c to disappear May 12 07:55:13.303: INFO: Pod client-containers-e54ba8b8-9425-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:55:13.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-qx56g" for this suite. May 12 07:55:19.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:55:19.656: INFO: namespace: e2e-tests-containers-qx56g, resource: bindings, ignored listing per whitelist May 12 07:55:19.669: INFO: namespace e2e-tests-containers-qx56g deletion completed in 6.360225492s • [SLOW TEST:12.987 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:55:19.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-ed1ede35-9425-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 07:55:20.214: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ed28b20a-9425-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-d54hh" to be "success or failure" May 12 07:55:20.280: INFO: Pod "pod-projected-configmaps-ed28b20a-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 66.161408ms May 12 07:55:22.283: INFO: Pod "pod-projected-configmaps-ed28b20a-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069235004s May 12 07:55:24.287: INFO: Pod "pod-projected-configmaps-ed28b20a-9425-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.073505344s May 12 07:55:26.292: INFO: Pod "pod-projected-configmaps-ed28b20a-9425-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.077886996s STEP: Saw pod success May 12 07:55:26.292: INFO: Pod "pod-projected-configmaps-ed28b20a-9425-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:55:26.294: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-ed28b20a-9425-11ea-bb6f-0242ac11001c container projected-configmap-volume-test: STEP: delete the pod May 12 07:55:26.364: INFO: Waiting for pod pod-projected-configmaps-ed28b20a-9425-11ea-bb6f-0242ac11001c to disappear May 12 07:55:26.452: INFO: Pod pod-projected-configmaps-ed28b20a-9425-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:55:26.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d54hh" for this suite. May 12 07:55:32.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:55:33.045: INFO: namespace: e2e-tests-projected-d54hh, resource: bindings, ignored listing per whitelist May 12 07:55:33.063: INFO: namespace e2e-tests-projected-d54hh deletion completed in 6.587545299s • [SLOW TEST:13.394 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:55:33.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-f5150d13-9425-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 07:55:33.490: INFO: Waiting up to 5m0s for pod "pod-configmaps-f515b996-9425-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-configmap-h8pq5" to be "success or failure" May 12 07:55:33.506: INFO: Pod "pod-configmaps-f515b996-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.887664ms May 12 07:55:35.510: INFO: Pod "pod-configmaps-f515b996-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019842458s May 12 07:55:37.514: INFO: Pod "pod-configmaps-f515b996-9425-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023249391s May 12 07:55:39.518: INFO: Pod "pod-configmaps-f515b996-9425-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.027333293s STEP: Saw pod success May 12 07:55:39.518: INFO: Pod "pod-configmaps-f515b996-9425-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:55:39.521: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-f515b996-9425-11ea-bb6f-0242ac11001c container configmap-volume-test: STEP: delete the pod May 12 07:55:39.557: INFO: Waiting for pod pod-configmaps-f515b996-9425-11ea-bb6f-0242ac11001c to disappear May 12 07:55:39.602: INFO: Pod pod-configmaps-f515b996-9425-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:55:39.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-h8pq5" for this suite. May 12 07:55:45.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:55:45.655: INFO: namespace: e2e-tests-configmap-h8pq5, resource: bindings, ignored listing per whitelist May 12 07:55:45.700: INFO: namespace e2e-tests-configmap-h8pq5 deletion completed in 6.094095788s • [SLOW TEST:12.637 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:55:45.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 07:55:45.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-l9dbq' May 12 07:55:48.346: INFO: stderr: "" May 12 07:55:48.346: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 12 07:55:53.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-l9dbq -o json' May 12 07:55:53.523: INFO: stderr: "" May 12 07:55:53.523: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-12T07:55:48Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n 
\"namespace\": \"e2e-tests-kubectl-l9dbq\",\n \"resourceVersion\": \"10119273\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-l9dbq/pods/e2e-test-nginx-pod\",\n \"uid\": \"fdf02c8d-9425-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-gwldd\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-gwldd\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-gwldd\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T07:55:48Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T07:55:51Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T07:55:51Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T07:55:48Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://2cc04d58cab9c232f7909f34b2b22e3e9d14a310622772894a3fbbd49698e8de\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-12T07:55:51Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.118\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-12T07:55:48Z\"\n }\n}\n" STEP: replace the image in the pod May 12 07:55:53.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-l9dbq' May 12 07:55:53.822: INFO: stderr: "" May 12 07:55:53.822: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 May 12 07:55:53.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-l9dbq' May 12 07:55:57.772: INFO: stderr: "" May 12 07:55:57.772: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] 
Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:55:57.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-l9dbq" for this suite. May 12 07:56:03.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:56:03.904: INFO: namespace: e2e-tests-kubectl-l9dbq, resource: bindings, ignored listing per whitelist May 12 07:56:03.916: INFO: namespace e2e-tests-kubectl-l9dbq deletion completed in 6.092032393s • [SLOW TEST:18.216 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:56:03.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 07:56:04.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-l99qk' May 12 07:56:04.151: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 07:56:04.151: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 12 07:56:04.182: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-blc2f] May 12 07:56:04.182: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-blc2f" in namespace "e2e-tests-kubectl-l99qk" to be "running and ready" May 12 07:56:04.190: INFO: Pod "e2e-test-nginx-rc-blc2f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.764144ms May 12 07:56:06.195: INFO: Pod "e2e-test-nginx-rc-blc2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01236144s May 12 07:56:08.218: INFO: Pod "e2e-test-nginx-rc-blc2f": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.036241316s May 12 07:56:08.218: INFO: Pod "e2e-test-nginx-rc-blc2f" satisfied condition "running and ready" May 12 07:56:08.219: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-blc2f] May 12 07:56:08.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-l99qk' May 12 07:56:08.352: INFO: stderr: "" May 12 07:56:08.352: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 May 12 07:56:08.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-l99qk' May 12 07:56:08.465: INFO: stderr: "" May 12 07:56:08.465: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:56:08.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-l99qk" for this suite. May 12 07:56:14.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:56:14.567: INFO: namespace: e2e-tests-kubectl-l99qk, resource: bindings, ignored listing per whitelist May 12 07:56:14.717: INFO: namespace e2e-tests-kubectl-l99qk deletion completed in 6.248111399s • [SLOW TEST:10.800 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:56:14.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 12 07:56:15.122: INFO: Waiting up to 5m0s for pod "downward-api-0dddbcf3-9426-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-downward-api-mpq2s" to be "success or failure" May 12 07:56:15.399: INFO: Pod "downward-api-0dddbcf3-9426-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 277.13714ms May 12 07:56:17.437: INFO: Pod "downward-api-0dddbcf3-9426-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31496939s May 12 07:56:19.452: INFO: Pod "downward-api-0dddbcf3-9426-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.330503231s May 12 07:56:21.457: INFO: Pod "downward-api-0dddbcf3-9426-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.334636028s May 12 07:56:23.627: INFO: Pod "downward-api-0dddbcf3-9426-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.504849677s May 12 07:56:25.765: INFO: Pod "downward-api-0dddbcf3-9426-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.642993188s May 12 07:56:27.997: INFO: Pod "downward-api-0dddbcf3-9426-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.87529919s May 12 07:56:30.001: INFO: Pod "downward-api-0dddbcf3-9426-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.879392833s STEP: Saw pod success May 12 07:56:30.001: INFO: Pod "downward-api-0dddbcf3-9426-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 07:56:30.004: INFO: Trying to get logs from node hunter-worker pod downward-api-0dddbcf3-9426-11ea-bb6f-0242ac11001c container dapi-container: STEP: delete the pod May 12 07:56:30.400: INFO: Waiting for pod downward-api-0dddbcf3-9426-11ea-bb6f-0242ac11001c to disappear May 12 07:56:30.758: INFO: Pod downward-api-0dddbcf3-9426-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:56:30.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-mpq2s" for this suite. May 12 07:56:37.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:56:37.070: INFO: namespace: e2e-tests-downward-api-mpq2s, resource: bindings, ignored listing per whitelist May 12 07:56:37.809: INFO: namespace e2e-tests-downward-api-mpq2s deletion completed in 7.046841832s • [SLOW TEST:23.092 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:56:37.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 12 07:56:38.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-br8m2' May 12 07:56:39.530: INFO: stderr: "" May 12 
07:56:39.530: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 07:56:39.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-br8m2' May 12 07:56:39.837: INFO: stderr: "" May 12 07:56:39.837: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 May 12 07:56:44.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-br8m2' May 12 07:56:44.944: INFO: stderr: "" May 12 07:56:44.944: INFO: stdout: "update-demo-nautilus-qjthl update-demo-nautilus-tsnfc " May 12 07:56:44.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qjthl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-br8m2' May 12 07:56:45.029: INFO: stderr: "" May 12 07:56:45.029: INFO: stdout: "" May 12 07:56:45.029: INFO: update-demo-nautilus-qjthl is created but not running May 12 07:56:50.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-br8m2' May 12 07:56:50.150: INFO: stderr: "" May 12 07:56:50.150: INFO: stdout: "update-demo-nautilus-qjthl update-demo-nautilus-tsnfc " May 12 07:56:50.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qjthl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-br8m2' May 12 07:56:50.389: INFO: stderr: "" May 12 07:56:50.389: INFO: stdout: "true" May 12 07:56:50.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qjthl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-br8m2' May 12 07:56:50.502: INFO: stderr: "" May 12 07:56:50.502: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 07:56:50.502: INFO: validating pod update-demo-nautilus-qjthl May 12 07:56:50.506: INFO: got data: { "image": "nautilus.jpg" } May 12 07:56:50.506: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 07:56:50.506: INFO: update-demo-nautilus-qjthl is verified up and running May 12 07:56:50.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tsnfc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-br8m2' May 12 07:56:50.610: INFO: stderr: "" May 12 07:56:50.610: INFO: stdout: "true" May 12 07:56:50.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tsnfc -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-br8m2' May 12 07:56:50.712: INFO: stderr: "" May 12 07:56:50.712: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 07:56:50.712: INFO: validating pod update-demo-nautilus-tsnfc May 12 07:56:50.716: INFO: got data: { "image": "nautilus.jpg" } May 12 07:56:50.716: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 07:56:50.716: INFO: update-demo-nautilus-tsnfc is verified up and running STEP: scaling down the replication controller May 12 07:56:50.718: INFO: scanned /root for discovery docs: May 12 07:56:50.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-br8m2' May 12 07:56:51.878: INFO: stderr: "" May 12 07:56:51.878: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 07:56:51.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-br8m2' May 12 07:56:51.979: INFO: stderr: "" May 12 07:56:51.980: INFO: stdout: "update-demo-nautilus-qjthl update-demo-nautilus-tsnfc " STEP: Replicas for name=update-demo: expected=1 actual=2 May 12 07:56:56.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-br8m2' May 12 07:56:57.085: INFO: stderr: "" May 12 07:56:57.085: INFO: stdout: "update-demo-nautilus-tsnfc " May 12 07:56:57.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tsnfc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-br8m2' May 12 07:56:57.428: INFO: stderr: "" May 12 07:56:57.428: INFO: stdout: "true" May 12 07:56:57.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tsnfc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-br8m2' May 12 07:56:57.515: INFO: stderr: "" May 12 07:56:57.515: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 07:56:57.515: INFO: validating pod update-demo-nautilus-tsnfc May 12 07:56:57.517: INFO: got data: { "image": "nautilus.jpg" } May 12 07:56:57.517: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 07:56:57.517: INFO: update-demo-nautilus-tsnfc is verified up and running STEP: scaling up the replication controller May 12 07:56:57.519: INFO: scanned /root for discovery docs: May 12 07:56:57.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-br8m2' May 12 07:56:58.744: INFO: stderr: "" May 12 07:56:58.744: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 12 07:56:58.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-br8m2' May 12 07:56:59.039: INFO: stderr: "" May 12 07:56:59.039: INFO: stdout: "update-demo-nautilus-fzz7s update-demo-nautilus-tsnfc " May 12 07:56:59.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fzz7s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-br8m2' May 12 07:56:59.149: INFO: stderr: "" May 12 07:56:59.149: INFO: stdout: "" May 12 07:56:59.149: INFO: update-demo-nautilus-fzz7s is created but not running May 12 07:57:04.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-br8m2' May 12 07:57:04.253: INFO: stderr: "" May 12 07:57:04.253: INFO: stdout: "update-demo-nautilus-fzz7s update-demo-nautilus-tsnfc " May 12 07:57:04.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fzz7s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-br8m2' May 12 07:57:04.345: INFO: stderr: "" May 12 07:57:04.345: INFO: stdout: "true" May 12 07:57:04.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fzz7s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-br8m2' May 12 07:57:04.453: INFO: stderr: "" May 12 07:57:04.453: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 07:57:04.453: INFO: validating pod update-demo-nautilus-fzz7s May 12 07:57:04.458: INFO: got data: { "image": "nautilus.jpg" } May 12 07:57:04.458: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 07:57:04.458: INFO: update-demo-nautilus-fzz7s is verified up and running May 12 07:57:04.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tsnfc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-br8m2' May 12 07:57:04.546: INFO: stderr: "" May 12 07:57:04.546: INFO: stdout: "true" May 12 07:57:04.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tsnfc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-br8m2' May 12 07:57:04.897: INFO: stderr: "" May 12 07:57:04.897: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 07:57:04.897: INFO: validating pod update-demo-nautilus-tsnfc May 12 07:57:04.959: INFO: got data: { "image": "nautilus.jpg" } May 12 07:57:04.959: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 12 07:57:04.959: INFO: update-demo-nautilus-tsnfc is verified up and running STEP: using delete to clean up resources May 12 07:57:04.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-br8m2' May 12 07:57:05.081: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 07:57:05.081: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 12 07:57:05.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-br8m2' May 12 07:57:05.816: INFO: stderr: "No resources found.\n" May 12 07:57:05.816: INFO: stdout: "" May 12 07:57:05.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-br8m2 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 07:57:06.064: INFO: stderr: "" May 12 07:57:06.064: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:57:06.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-br8m2" for this suite. May 12 07:57:18.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:57:18.734: INFO: namespace: e2e-tests-kubectl-br8m2, resource: bindings, ignored listing per whitelist May 12 07:57:18.743: INFO: namespace e2e-tests-kubectl-br8m2 deletion completed in 12.674816523s • [SLOW TEST:40.934 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:57:18.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-34269956-9426-11ea-bb6f-0242ac11001c STEP: Creating secret with name s-test-opt-upd-342699b4-9426-11ea-bb6f-0242ac11001c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-34269956-9426-11ea-bb6f-0242ac11001c STEP: Updating secret s-test-opt-upd-342699b4-9426-11ea-bb6f-0242ac11001c STEP: Creating secret with name s-test-opt-create-342699d6-9426-11ea-bb6f-0242ac11001c STEP: waiting to observe update in volume 
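The projected-secret test whose steps just ran above mounts secrets through a projected volume whose sources are marked optional, then deletes, updates, and creates secrets and waits for the mounted files to follow. A rough manifest sketch of such a pod (container name, image, and mount path are assumptions, not taken from the test source; the secret names are the ones logged above) is:

  kubectl create -f - --namespace=e2e-tests-projected-czwnr <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secrets-example       # assumed name
  spec:
    containers:
    - name: projected-secret-volume-test      # assumed name
      image: docker.io/library/busybox:1.29   # assumed image
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: projected-secret-volume
        mountPath: /etc/projected-secret-volume
        readOnly: true
    volumes:
    - name: projected-secret-volume
      projected:
        sources:
        # optional sources tolerate a referenced secret being deleted or created later,
        # which is exactly what the test exercises
        - secret:
            name: s-test-opt-del-34269956-9426-11ea-bb6f-0242ac11001c
            optional: true
        - secret:
            name: s-test-opt-upd-342699b4-9426-11ea-bb6f-0242ac11001c
            optional: true
        - secret:
            name: s-test-opt-create-342699d6-9426-11ea-bb6f-0242ac11001c
            optional: true
  EOF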
[AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:58:49.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-czwnr" for this suite. May 12 07:59:13.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:59:13.241: INFO: namespace: e2e-tests-projected-czwnr, resource: bindings, ignored listing per whitelist May 12 07:59:13.521: INFO: namespace e2e-tests-projected-czwnr deletion completed in 24.333403739s • [SLOW TEST:114.778 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:59:13.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info May 12 07:59:13.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 12 07:59:13.855: INFO: stderr: "" May 12 07:59:13.855: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:59:13.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-z72vg" for this suite. 
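The cluster-info check above only needs the Kubernetes master and KubeDNS endpoints to appear in the command output; the same information can be pulled up by hand with:

  kubectl --kubeconfig=/root/.kube/config cluster-info
  # and, as the output itself suggests for deeper debugging:
  kubectl --kubeconfig=/root/.kube/config cluster-info dump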
May 12 07:59:19.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:59:19.936: INFO: namespace: e2e-tests-kubectl-z72vg, resource: bindings, ignored listing per whitelist May 12 07:59:19.966: INFO: namespace e2e-tests-kubectl-z72vg deletion completed in 6.107340258s • [SLOW TEST:6.444 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:59:19.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:59:32.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-ln6kh" for this suite. 
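The EmptyDir wrapper test only logs its cleanup steps here; the scenario it exercises is one pod mounting a secret volume and a configMap volume side by side, so that the internal wrapper emptyDirs must not collide. A rough sketch of such a pod (all names, the image, and the paths are assumptions) is:

  kubectl create -f - --namespace=e2e-tests-emptydir-wrapper-ln6kh <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-wrapper-example            # assumed name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container                    # assumed name
      image: docker.io/library/busybox:1.29   # assumed image
      command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: wrapper-test-secret       # assumed
    - name: configmap-volume
      configMap:
        name: wrapper-test-configmap          # assumed
  EOF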
May 12 07:59:39.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 07:59:39.564: INFO: namespace: e2e-tests-emptydir-wrapper-ln6kh, resource: bindings, ignored listing per whitelist May 12 07:59:39.618: INFO: namespace e2e-tests-emptydir-wrapper-ln6kh deletion completed in 6.448534621s • [SLOW TEST:19.651 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 07:59:39.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 12 07:59:48.024: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-87f647f7-9426-11ea-bb6f-0242ac11001c,GenerateName:,Namespace:e2e-tests-events-mpmlv,SelfLink:/api/v1/namespaces/e2e-tests-events-mpmlv/pods/send-events-87f647f7-9426-11ea-bb6f-0242ac11001c,UID:87fa14b7-9426-11ea-99e8-0242ac110002,ResourceVersion:10119960,Generation:0,CreationTimestamp:2020-05-12 07:59:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 895405778,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-44qk7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-44qk7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-44qk7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001aa5e90} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001aa5eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 07:59:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 07:59:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 07:59:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 07:59:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.45,StartTime:2020-05-12 07:59:40 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-12 07:59:45 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://3c909564306caa97eae387a19bc0a33375fe427327edb1cb1a4316e70c57119a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 12 07:59:50.030: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 12 07:59:52.034: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 07:59:52.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-mpmlv" for this suite. May 12 08:00:32.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:00:32.274: INFO: namespace: e2e-tests-events-mpmlv, resource: bindings, ignored listing per whitelist May 12 08:00:32.314: INFO: namespace e2e-tests-events-mpmlv deletion completed in 40.150595168s • [SLOW TEST:52.695 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:00:32.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 12 08:00:32.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-56hn9' May 12 08:00:33.310: INFO: stderr: "" May 12 
08:00:33.310: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 08:00:33.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-56hn9' May 12 08:00:33.500: INFO: stderr: "" May 12 08:00:33.500: INFO: stdout: "update-demo-nautilus-mrw5f update-demo-nautilus-ps2lx " May 12 08:00:33.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mrw5f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-56hn9' May 12 08:00:33.644: INFO: stderr: "" May 12 08:00:33.644: INFO: stdout: "" May 12 08:00:33.644: INFO: update-demo-nautilus-mrw5f is created but not running May 12 08:00:38.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-56hn9' May 12 08:00:38.746: INFO: stderr: "" May 12 08:00:38.746: INFO: stdout: "update-demo-nautilus-mrw5f update-demo-nautilus-ps2lx " May 12 08:00:38.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mrw5f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-56hn9' May 12 08:00:38.935: INFO: stderr: "" May 12 08:00:38.935: INFO: stdout: "" May 12 08:00:38.935: INFO: update-demo-nautilus-mrw5f is created but not running May 12 08:00:43.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-56hn9' May 12 08:00:44.043: INFO: stderr: "" May 12 08:00:44.043: INFO: stdout: "update-demo-nautilus-mrw5f update-demo-nautilus-ps2lx " May 12 08:00:44.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mrw5f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-56hn9' May 12 08:00:44.128: INFO: stderr: "" May 12 08:00:44.128: INFO: stdout: "true" May 12 08:00:44.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mrw5f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-56hn9' May 12 08:00:44.236: INFO: stderr: "" May 12 08:00:44.236: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 08:00:44.236: INFO: validating pod update-demo-nautilus-mrw5f May 12 08:00:44.240: INFO: got data: { "image": "nautilus.jpg" } May 12 08:00:44.240: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 12 08:00:44.240: INFO: update-demo-nautilus-mrw5f is verified up and running May 12 08:00:44.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ps2lx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-56hn9' May 12 08:00:44.391: INFO: stderr: "" May 12 08:00:44.391: INFO: stdout: "true" May 12 08:00:44.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ps2lx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-56hn9' May 12 08:00:44.494: INFO: stderr: "" May 12 08:00:44.494: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 08:00:44.494: INFO: validating pod update-demo-nautilus-ps2lx May 12 08:00:44.498: INFO: got data: { "image": "nautilus.jpg" } May 12 08:00:44.498: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 08:00:44.498: INFO: update-demo-nautilus-ps2lx is verified up and running STEP: using delete to clean up resources May 12 08:00:44.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-56hn9' May 12 08:00:44.607: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 08:00:44.607: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 12 08:00:44.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-56hn9' May 12 08:00:44.885: INFO: stderr: "No resources found.\n" May 12 08:00:44.885: INFO: stdout: "" May 12 08:00:44.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-56hn9 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 08:00:44.986: INFO: stderr: "" May 12 08:00:44.986: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:00:44.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-56hn9" for this suite. 
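The create-and-stop flow above reduces to creating the replication controller from a manifest on stdin, waiting for its labelled pods, and force-deleting it; a condensed standalone version (the manifest file name is an assumption, since the test pipes its own fixture) is:

  kubectl --kubeconfig=/root/.kube/config create -f update-demo-nautilus-rc.yaml --namespace=e2e-tests-kubectl-56hn9   # assumed file name
  kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-56hn9
  # force deletion returns immediately and does not wait for the pods to terminate,
  # which is what the warning in the records above is about
  kubectl --kubeconfig=/root/.kube/config delete rc update-demo-nautilus --grace-period=0 --force --namespace=e2e-tests-kubectl-56hn9
  kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-56hn9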
May 12 08:01:09.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:01:10.960: INFO: namespace: e2e-tests-kubectl-56hn9, resource: bindings, ignored listing per whitelist May 12 08:01:10.992: INFO: namespace e2e-tests-kubectl-56hn9 deletion completed in 25.886768077s • [SLOW TEST:38.677 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:01:10.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed May 12 08:01:18.491: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-bef52462-9426-11ea-bb6f-0242ac11001c", GenerateName:"", Namespace:"e2e-tests-pods-85hgz", SelfLink:"/api/v1/namespaces/e2e-tests-pods-85hgz/pods/pod-submit-remove-bef52462-9426-11ea-bb6f-0242ac11001c", UID:"befc2e00-9426-11ea-99e8-0242ac110002", ResourceVersion:"10120208", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724867272, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"162627928"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-p9djp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0020c9f80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-p9djp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00230a558), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0018c74a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00230a5a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00230a5c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00230a5c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00230a5cc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724867273, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724867278, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724867278, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724867272, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.123", StartTime:(*v1.Time)(0xc0013c89e0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0013c8a00), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://f9b7a4f407854d78e8274b3a49b4e13dd3f232ae6f2d2995d6415dea910521d2"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:01:24.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-85hgz" for this suite. May 12 08:01:32.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:01:32.202: INFO: namespace: e2e-tests-pods-85hgz, resource: bindings, ignored listing per whitelist May 12 08:01:32.254: INFO: namespace e2e-tests-pods-85hgz deletion completed in 8.106387747s • [SLOW TEST:21.262 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:01:32.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-4rwm STEP: Creating a pod to test atomic-volume-subpath May 12 08:01:32.676: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-4rwm" in namespace "e2e-tests-subpath-scbcg" to be "success or failure" May 12 08:01:32.702: INFO: Pod "pod-subpath-test-projected-4rwm": Phase="Pending", Reason="", readiness=false. Elapsed: 26.23863ms May 12 08:01:34.762: INFO: Pod "pod-subpath-test-projected-4rwm": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.085718024s May 12 08:01:36.836: INFO: Pod "pod-subpath-test-projected-4rwm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159772614s May 12 08:01:38.840: INFO: Pod "pod-subpath-test-projected-4rwm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163580516s May 12 08:01:40.907: INFO: Pod "pod-subpath-test-projected-4rwm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.230898634s May 12 08:01:42.911: INFO: Pod "pod-subpath-test-projected-4rwm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.234598345s May 12 08:01:44.914: INFO: Pod "pod-subpath-test-projected-4rwm": Phase="Running", Reason="", readiness=false. Elapsed: 12.238240108s May 12 08:01:46.919: INFO: Pod "pod-subpath-test-projected-4rwm": Phase="Running", Reason="", readiness=false. Elapsed: 14.243467614s May 12 08:01:48.923: INFO: Pod "pod-subpath-test-projected-4rwm": Phase="Running", Reason="", readiness=false. Elapsed: 16.247541893s May 12 08:01:50.927: INFO: Pod "pod-subpath-test-projected-4rwm": Phase="Running", Reason="", readiness=false. Elapsed: 18.251316041s May 12 08:01:52.931: INFO: Pod "pod-subpath-test-projected-4rwm": Phase="Running", Reason="", readiness=false. Elapsed: 20.255015842s May 12 08:01:54.934: INFO: Pod "pod-subpath-test-projected-4rwm": Phase="Running", Reason="", readiness=false. Elapsed: 22.25827254s May 12 08:01:56.939: INFO: Pod "pod-subpath-test-projected-4rwm": Phase="Running", Reason="", readiness=false. Elapsed: 24.262753068s May 12 08:01:58.943: INFO: Pod "pod-subpath-test-projected-4rwm": Phase="Running", Reason="", readiness=false. Elapsed: 26.26703305s May 12 08:02:00.947: INFO: Pod "pod-subpath-test-projected-4rwm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.271449814s STEP: Saw pod success May 12 08:02:00.947: INFO: Pod "pod-subpath-test-projected-4rwm" satisfied condition "success or failure" May 12 08:02:00.950: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-4rwm container test-container-subpath-projected-4rwm: STEP: delete the pod May 12 08:02:00.977: INFO: Waiting for pod pod-subpath-test-projected-4rwm to disappear May 12 08:02:00.983: INFO: Pod pod-subpath-test-projected-4rwm no longer exists STEP: Deleting pod pod-subpath-test-projected-4rwm May 12 08:02:00.983: INFO: Deleting pod "pod-subpath-test-projected-4rwm" in namespace "e2e-tests-subpath-scbcg" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:02:00.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-scbcg" for this suite. 
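The subpath test above mounts a single file out of a projected volume into the container through subPath and waits for the pod to run to completion. A rough manifest fragment showing a subPath mount on a projected volume (the volume source, key, and paths are assumptions, not taken from the test source) is:

  kubectl create -f - --namespace=e2e-tests-subpath-scbcg <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-subpath-example                 # assumed name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container-subpath            # assumed name
      image: docker.io/library/busybox:1.29   # assumed image
      command: ["sh", "-c", "cat /test/subpath-file"]
      volumeMounts:
      - name: projected-volume
        mountPath: /test/subpath-file
        subPath: subpath-file                 # mount only this path from the volume
    volumes:
    - name: projected-volume
      projected:
        sources:
        - configMap:
            name: subpath-configmap           # assumed; must contain the key subpath-file
  EOF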
May 12 08:02:09.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:02:09.033: INFO: namespace: e2e-tests-subpath-scbcg, resource: bindings, ignored listing per whitelist May 12 08:02:09.067: INFO: namespace e2e-tests-subpath-scbcg deletion completed in 8.079051741s • [SLOW TEST:36.813 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:02:09.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 12 08:02:16.847: INFO: Successfully updated pod "annotationupdatee188eb57-9426-11ea-bb6f-0242ac11001c" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:02:19.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tvvg6" for this suite. 
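The downward API volume test that just finished depends on the kubelet refreshing the mounted file after the pod's annotations are updated. A minimal pod sketch exposing metadata.annotations through a downwardAPI volume (names, image, and paths are assumptions) is:

  kubectl create -f - --namespace=e2e-tests-downward-api-tvvg6 <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: annotationupdate-example            # assumed name
    annotations:
      builder: alice                          # assumed initial annotation
  spec:
    containers:
    - name: client-container                  # assumed name
      image: docker.io/library/busybox:1.29   # assumed image
      command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: annotations
          fieldRef:
            fieldPath: metadata.annotations
  EOF
  # a later change, e.g. kubectl annotate pod annotationupdate-example builder=bob --overwrite,
  # eventually shows up in /etc/podinfo/annotations inside the running container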
May 12 08:02:43.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:02:43.056: INFO: namespace: e2e-tests-downward-api-tvvg6, resource: bindings, ignored listing per whitelist May 12 08:02:43.123: INFO: namespace e2e-tests-downward-api-tvvg6 deletion completed in 24.092378166s • [SLOW TEST:34.056 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:02:43.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 08:02:43.772: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f582cbcc-9426-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-5lm67" to be "success or failure" May 12 08:02:43.786: INFO: Pod "downwardapi-volume-f582cbcc-9426-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.834684ms May 12 08:02:45.790: INFO: Pod "downwardapi-volume-f582cbcc-9426-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018412784s May 12 08:02:47.794: INFO: Pod "downwardapi-volume-f582cbcc-9426-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021687347s May 12 08:02:49.872: INFO: Pod "downwardapi-volume-f582cbcc-9426-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099834125s May 12 08:02:51.875: INFO: Pod "downwardapi-volume-f582cbcc-9426-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.103202679s STEP: Saw pod success May 12 08:02:51.875: INFO: Pod "downwardapi-volume-f582cbcc-9426-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:02:51.877: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f582cbcc-9426-11ea-bb6f-0242ac11001c container client-container: STEP: delete the pod May 12 08:02:52.118: INFO: Waiting for pod downwardapi-volume-f582cbcc-9426-11ea-bb6f-0242ac11001c to disappear May 12 08:02:52.211: INFO: Pod downwardapi-volume-f582cbcc-9426-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:02:52.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5lm67" for this suite. 
May 12 08:03:00.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:03:00.608: INFO: namespace: e2e-tests-projected-5lm67, resource: bindings, ignored listing per whitelist May 12 08:03:01.063: INFO: namespace e2e-tests-projected-5lm67 deletion completed in 8.849548551s • [SLOW TEST:17.940 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:03:01.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 12 08:03:02.223: INFO: Waiting up to 5m0s for pod "downward-api-0076314e-9427-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-downward-api-ftpbt" to be "success or failure" May 12 08:03:02.411: INFO: Pod "downward-api-0076314e-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 187.976062ms May 12 08:03:04.415: INFO: Pod "downward-api-0076314e-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19251233s May 12 08:03:06.419: INFO: Pod "downward-api-0076314e-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196108929s May 12 08:03:08.422: INFO: Pod "downward-api-0076314e-9427-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.199617865s STEP: Saw pod success May 12 08:03:08.423: INFO: Pod "downward-api-0076314e-9427-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:03:08.424: INFO: Trying to get logs from node hunter-worker pod downward-api-0076314e-9427-11ea-bb6f-0242ac11001c container dapi-container: STEP: delete the pod May 12 08:03:08.841: INFO: Waiting for pod downward-api-0076314e-9427-11ea-bb6f-0242ac11001c to disappear May 12 08:03:08.926: INFO: Pod downward-api-0076314e-9427-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:03:08.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ftpbt" for this suite. 
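The env-var form of the downward API shown above (and in the limits.cpu/memory test earlier in this run) maps pod fields and container resources into environment variables. A minimal sketch (variable names, image, and resource values are assumptions) is:

  kubectl create -f - --namespace=e2e-tests-downward-api-ftpbt <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-env-example            # assumed name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: docker.io/library/busybox:1.29   # assumed image
      command: ["sh", "-c", "env | grep -E 'POD_NAME|POD_NAMESPACE|POD_IP|CPU_LIMIT'"]
      resources:
        limits:
          cpu: 250m
          memory: 64Mi
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef:
            containerName: dapi-container
            resource: limits.cpu
  EOF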
May 12 08:03:16.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:03:17.030: INFO: namespace: e2e-tests-downward-api-ftpbt, resource: bindings, ignored listing per whitelist May 12 08:03:17.047: INFO: namespace e2e-tests-downward-api-ftpbt deletion completed in 8.116584235s • [SLOW TEST:15.983 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:03:17.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 08:03:17.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-vr7ml' May 12 08:03:17.445: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 08:03:17.445: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 12 08:03:17.580: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 12 08:03:17.601: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 12 08:03:17.827: INFO: scanned /root for discovery docs: May 12 08:03:17.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-vr7ml' May 12 08:03:35.379: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 12 08:03:35.379: INFO: stdout: "Created e2e-test-nginx-rc-29e5e1057850e9fd5d0c61d3cebd10f1\nScaling up e2e-test-nginx-rc-29e5e1057850e9fd5d0c61d3cebd10f1 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-29e5e1057850e9fd5d0c61d3cebd10f1 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-29e5e1057850e9fd5d0c61d3cebd10f1 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" May 12 08:03:35.379: INFO: stdout: "Created e2e-test-nginx-rc-29e5e1057850e9fd5d0c61d3cebd10f1\nScaling up e2e-test-nginx-rc-29e5e1057850e9fd5d0c61d3cebd10f1 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-29e5e1057850e9fd5d0c61d3cebd10f1 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-29e5e1057850e9fd5d0c61d3cebd10f1 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 12 08:03:35.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-vr7ml' May 12 08:03:35.507: INFO: stderr: "" May 12 08:03:35.507: INFO: stdout: "e2e-test-nginx-rc-29e5e1057850e9fd5d0c61d3cebd10f1-9wdzc " May 12 08:03:35.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-29e5e1057850e9fd5d0c61d3cebd10f1-9wdzc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vr7ml' May 12 08:03:35.707: INFO: stderr: "" May 12 08:03:35.707: INFO: stdout: "true" May 12 08:03:35.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-29e5e1057850e9fd5d0c61d3cebd10f1-9wdzc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vr7ml' May 12 08:03:35.794: INFO: stderr: "" May 12 08:03:35.794: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 12 08:03:35.794: INFO: e2e-test-nginx-rc-29e5e1057850e9fd5d0c61d3cebd10f1-9wdzc is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 May 12 08:03:35.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-vr7ml' May 12 08:03:35.931: INFO: stderr: "" May 12 08:03:35.931: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:03:35.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-vr7ml" for this suite. 
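The rolling-update flow logged above can be reproduced outside the suite with the same (deprecated) kubectl commands; the controller name below is illustrative, while the image and flags match the ones shown in the log.

  kubectl run my-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
  kubectl rolling-update my-nginx-rc --update-period=1s \
      --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent
  kubectl get pods -l run=my-nginx-rc \
      -o template --template='{{range .items}}{{.metadata.name}} {{end}}'

As the stderr lines note, both run --generator=run/v1 and rolling-update were already deprecated in v1.13, with rollout suggested as the replacement.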
May 12 08:03:42.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:03:42.303: INFO: namespace: e2e-tests-kubectl-vr7ml, resource: bindings, ignored listing per whitelist May 12 08:03:42.343: INFO: namespace e2e-tests-kubectl-vr7ml deletion completed in 6.408263683s • [SLOW TEST:25.296 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:03:42.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 08:03:42.769: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 6.011243ms) May 12 08:03:42.773: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.378817ms) May 12 08:03:42.776: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.581781ms) May 12 08:03:42.778: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.516336ms) May 12 08:03:42.781: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.876734ms) May 12 08:03:42.783: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.35758ms) May 12 08:03:42.874: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 90.91823ms) May 12 08:03:42.878: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.944737ms) May 12 08:03:42.882: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 4.00193ms) May 12 08:03:42.885: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.698207ms) May 12 08:03:42.888: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.447649ms) May 12 08:03:42.890: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.264002ms) May 12 08:03:42.892: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.07139ms) May 12 08:03:42.894: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.078952ms) May 12 08:03:42.896: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.004615ms) May 12 08:03:42.898: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.290569ms) May 12 08:03:42.900: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 1.931507ms) May 12 08:03:42.903: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.355196ms) May 12 08:03:42.906: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.790325ms) May 12 08:03:42.908: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.545034ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:03:42.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-rf2vs" for this suite. May 12 08:03:48.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:03:49.036: INFO: namespace: e2e-tests-proxy-rf2vs, resource: bindings, ignored listing per whitelist May 12 08:03:49.079: INFO: namespace e2e-tests-proxy-rf2vs deletion completed in 6.168493887s • [SLOW TEST:6.736 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:03:49.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 08:03:49.623: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1cc3e90a-9427-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-downward-api-v9p8g" to be "success or failure" May 12 08:03:49.706: INFO: Pod "downwardapi-volume-1cc3e90a-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 82.670835ms May 12 08:03:51.722: INFO: Pod "downwardapi-volume-1cc3e90a-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099018877s May 12 08:03:53.727: INFO: Pod "downwardapi-volume-1cc3e90a-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103650577s May 12 08:03:55.731: INFO: Pod "downwardapi-volume-1cc3e90a-9427-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.108036939s STEP: Saw pod success May 12 08:03:55.731: INFO: Pod "downwardapi-volume-1cc3e90a-9427-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:03:55.734: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-1cc3e90a-9427-11ea-bb6f-0242ac11001c container client-container: STEP: delete the pod May 12 08:03:55.917: INFO: Waiting for pod downwardapi-volume-1cc3e90a-9427-11ea-bb6f-0242ac11001c to disappear May 12 08:03:55.947: INFO: Pod downwardapi-volume-1cc3e90a-9427-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:03:55.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-v9p8g" for this suite. May 12 08:04:02.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:04:02.164: INFO: namespace: e2e-tests-downward-api-v9p8g, resource: bindings, ignored listing per whitelist May 12 08:04:02.214: INFO: namespace e2e-tests-downward-api-v9p8g deletion completed in 6.248128411s • [SLOW TEST:13.134 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:04:02.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod May 12 08:04:06.554: INFO: Pod pod-hostip-2462d783-9427-11ea-bb6f-0242ac11001c has hostIP: 172.17.0.4 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:04:06.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-clpr4" for this suite. 
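The host-IP case above only needs the pod's status; a quick way to read the same field by hand (the pod name and namespace here are placeholders) is:

  kubectl get pod pod-hostip-demo --namespace=<test-namespace> \
      -o jsonpath='{.status.hostIP}'

The check passes once this field is populated with the IP of the node the pod was scheduled to (172.17.0.4 in this run).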
May 12 08:04:28.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:04:28.700: INFO: namespace: e2e-tests-pods-clpr4, resource: bindings, ignored listing per whitelist May 12 08:04:28.724: INFO: namespace e2e-tests-pods-clpr4 deletion completed in 22.165565157s • [SLOW TEST:26.510 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:04:28.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 12 08:04:28.837: INFO: Waiting up to 5m0s for pod "pod-342c84d5-9427-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-emptydir-kf5h5" to be "success or failure" May 12 08:04:28.849: INFO: Pod "pod-342c84d5-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.076839ms May 12 08:04:31.285: INFO: Pod "pod-342c84d5-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.447308315s May 12 08:04:33.376: INFO: Pod "pod-342c84d5-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.538864336s May 12 08:04:35.790: INFO: Pod "pod-342c84d5-9427-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.952894862s May 12 08:04:37.795: INFO: Pod "pod-342c84d5-9427-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.957006053s STEP: Saw pod success May 12 08:04:37.795: INFO: Pod "pod-342c84d5-9427-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:04:37.798: INFO: Trying to get logs from node hunter-worker2 pod pod-342c84d5-9427-11ea-bb6f-0242ac11001c container test-container: STEP: delete the pod May 12 08:04:37.995: INFO: Waiting for pod pod-342c84d5-9427-11ea-bb6f-0242ac11001c to disappear May 12 08:04:38.067: INFO: Pod pod-342c84d5-9427-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:04:38.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kf5h5" for this suite. 
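The emptydir case above boils down to writing a 0644 file into a memory-backed emptyDir volume as a non-root user. A rough equivalent, using an arbitrary UID and illustrative names rather than the image the suite uses:

  kubectl create -f - --namespace=<test-namespace> <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo          # illustrative name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001                  # any non-root UID
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "echo hello > /mnt/vol/f && chmod 0644 /mnt/vol/f && ls -ln /mnt/vol"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt/vol
    volumes:
    - name: scratch
      emptyDir:
        medium: Memory                 # tmpfs-backed emptyDir
  EOF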
May 12 08:04:46.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:04:46.458: INFO: namespace: e2e-tests-emptydir-kf5h5, resource: bindings, ignored listing per whitelist May 12 08:04:46.464: INFO: namespace e2e-tests-emptydir-kf5h5 deletion completed in 8.393482095s • [SLOW TEST:17.738 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:04:46.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:04:46.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-p67f5" for this suite. 
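The secure-master-service case has no test pod at all; it only inspects the built-in kubernetes service in the default namespace. The same information can be read directly:

  kubectl get service kubernetes --namespace=default \
      -o jsonpath='{.spec.ports[0].name}:{.spec.ports[0].port}'

On a healthy cluster this prints the HTTPS port of the API server service (https:443), which is essentially what the test asserts.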
May 12 08:04:54.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:04:56.614: INFO: namespace: e2e-tests-services-p67f5, resource: bindings, ignored listing per whitelist May 12 08:04:56.663: INFO: namespace e2e-tests-services-p67f5 deletion completed in 9.837865205s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:10.199 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:04:56.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:05:54.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-j9bsg" for this suite. 
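The container-runtime case above repeatedly starts containers that exit and compares RestartCount, Phase, the Ready condition, and State against expectations, per the STEP lines. When checking a pod of your own by hand (the pod name below is a placeholder), the same fields live under status.containerStatuses:

  kubectl get pod terminate-cmd-demo -o jsonpath='{.status.phase}'
  kubectl get pod terminate-cmd-demo \
      -o jsonpath='{.status.containerStatuses[0].restartCount}'
  kubectl get pod terminate-cmd-demo \
      -o jsonpath='{.status.containerStatuses[0].state}'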
May 12 08:06:02.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:06:02.200: INFO: namespace: e2e-tests-container-runtime-j9bsg, resource: bindings, ignored listing per whitelist May 12 08:06:02.222: INFO: namespace e2e-tests-container-runtime-j9bsg deletion completed in 8.159288462s • [SLOW TEST:65.559 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:06:02.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-6c0d5bb9-9427-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 08:06:02.905: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6c37d50e-9427-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-2xx4z" to be "success or failure" May 12 08:06:03.168: INFO: Pod "pod-projected-configmaps-6c37d50e-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 262.802271ms May 12 08:06:05.216: INFO: Pod "pod-projected-configmaps-6c37d50e-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310641461s May 12 08:06:07.569: INFO: Pod "pod-projected-configmaps-6c37d50e-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.663927232s May 12 08:06:09.574: INFO: Pod "pod-projected-configmaps-6c37d50e-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.668812951s May 12 08:06:11.578: INFO: Pod "pod-projected-configmaps-6c37d50e-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.672695663s May 12 08:06:13.582: INFO: Pod "pod-projected-configmaps-6c37d50e-9427-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 10.676417652s May 12 08:06:15.587: INFO: Pod "pod-projected-configmaps-6c37d50e-9427-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.681099421s STEP: Saw pod success May 12 08:06:15.587: INFO: Pod "pod-projected-configmaps-6c37d50e-9427-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:06:15.589: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-6c37d50e-9427-11ea-bb6f-0242ac11001c container projected-configmap-volume-test: STEP: delete the pod May 12 08:06:15.723: INFO: Waiting for pod pod-projected-configmaps-6c37d50e-9427-11ea-bb6f-0242ac11001c to disappear May 12 08:06:15.736: INFO: Pod pod-projected-configmaps-6c37d50e-9427-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:06:15.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2xx4z" for this suite. May 12 08:06:21.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:06:21.948: INFO: namespace: e2e-tests-projected-2xx4z, resource: bindings, ignored listing per whitelist May 12 08:06:21.948: INFO: namespace e2e-tests-projected-2xx4z deletion completed in 6.207754553s • [SLOW TEST:19.725 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:06:21.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 08:06:22.600: INFO: Waiting up to 5m0s for pod "downwardapi-volume-77f80635-9427-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-7dqv7" to be "success or failure" May 12 08:06:22.690: INFO: Pod "downwardapi-volume-77f80635-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 89.678077ms May 12 08:06:24.803: INFO: Pod "downwardapi-volume-77f80635-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203428558s May 12 08:06:26.807: INFO: Pod "downwardapi-volume-77f80635-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207238281s May 12 08:06:28.894: INFO: Pod "downwardapi-volume-77f80635-9427-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.293917279s STEP: Saw pod success May 12 08:06:28.894: INFO: Pod "downwardapi-volume-77f80635-9427-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:06:28.944: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-77f80635-9427-11ea-bb6f-0242ac11001c container client-container: STEP: delete the pod May 12 08:06:29.081: INFO: Waiting for pod downwardapi-volume-77f80635-9427-11ea-bb6f-0242ac11001c to disappear May 12 08:06:29.331: INFO: Pod downwardapi-volume-77f80635-9427-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:06:29.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7dqv7" for this suite. May 12 08:06:37.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:06:38.068: INFO: namespace: e2e-tests-projected-7dqv7, resource: bindings, ignored listing per whitelist May 12 08:06:38.078: INFO: namespace e2e-tests-projected-7dqv7 deletion completed in 8.74321883s • [SLOW TEST:16.130 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:06:38.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 08:06:39.584: INFO: Waiting up to 5m0s for pod "downwardapi-volume-820d271a-9427-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-downward-api-cw4kz" to be "success or failure" May 12 08:06:39.636: INFO: Pod "downwardapi-volume-820d271a-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 51.876061ms May 12 08:06:42.109: INFO: Pod "downwardapi-volume-820d271a-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.524809303s May 12 08:06:44.112: INFO: Pod "downwardapi-volume-820d271a-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.528246603s May 12 08:06:46.140: INFO: Pod "downwardapi-volume-820d271a-9427-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.555623715s STEP: Saw pod success May 12 08:06:46.140: INFO: Pod "downwardapi-volume-820d271a-9427-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:06:46.147: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-820d271a-9427-11ea-bb6f-0242ac11001c container client-container: STEP: delete the pod May 12 08:06:46.202: INFO: Waiting for pod downwardapi-volume-820d271a-9427-11ea-bb6f-0242ac11001c to disappear May 12 08:06:46.234: INFO: Pod downwardapi-volume-820d271a-9427-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:06:46.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-cw4kz" for this suite. May 12 08:06:55.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:06:55.138: INFO: namespace: e2e-tests-downward-api-cw4kz, resource: bindings, ignored listing per whitelist May 12 08:06:55.178: INFO: namespace e2e-tests-downward-api-cw4kz deletion completed in 8.575903868s • [SLOW TEST:17.100 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:06:55.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:07:04.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-txz2c" for this suite. 
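For the always-failing busybox pod above, the assertion is about the terminated state recorded by the kubelet. With a crash-looping pod of your own (the name is a placeholder), the relevant fields can be read like this:

  kubectl get pod bin-false-demo \
      -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
  kubectl get pod bin-false-demo \
      -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'

A container whose command always fails typically reports Error as the terminated reason and CrashLoopBackOff while it waits to be restarted.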
May 12 08:07:15.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:07:15.183: INFO: namespace: e2e-tests-kubelet-test-txz2c, resource: bindings, ignored listing per whitelist May 12 08:07:15.225: INFO: namespace e2e-tests-kubelet-test-txz2c deletion completed in 10.29515462s • [SLOW TEST:20.047 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:07:15.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-97f1ffaf-9427-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 08:07:16.910: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9830d0cf-9427-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-stdnp" to be "success or failure" May 12 08:07:17.176: INFO: Pod "pod-projected-configmaps-9830d0cf-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 266.37041ms May 12 08:07:19.179: INFO: Pod "pod-projected-configmaps-9830d0cf-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.269237862s May 12 08:07:21.183: INFO: Pod "pod-projected-configmaps-9830d0cf-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.272906042s May 12 08:07:23.733: INFO: Pod "pod-projected-configmaps-9830d0cf-9427-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.823199132s May 12 08:07:25.737: INFO: Pod "pod-projected-configmaps-9830d0cf-9427-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.826612593s STEP: Saw pod success May 12 08:07:25.737: INFO: Pod "pod-projected-configmaps-9830d0cf-9427-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:07:25.739: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-9830d0cf-9427-11ea-bb6f-0242ac11001c container projected-configmap-volume-test: STEP: delete the pod May 12 08:07:26.781: INFO: Waiting for pod pod-projected-configmaps-9830d0cf-9427-11ea-bb6f-0242ac11001c to disappear May 12 08:07:26.906: INFO: Pod pod-projected-configmaps-9830d0cf-9427-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:07:26.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-stdnp" for this suite. May 12 08:07:35.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:07:35.052: INFO: namespace: e2e-tests-projected-stdnp, resource: bindings, ignored listing per whitelist May 12 08:07:35.166: INFO: namespace e2e-tests-projected-stdnp deletion completed in 8.256896379s • [SLOW TEST:19.941 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:07:35.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 12 08:07:47.637: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 08:07:47.643: INFO: Pod pod-with-poststart-http-hook still exists May 12 08:07:49.643: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 08:07:49.788: INFO: Pod pod-with-poststart-http-hook still exists May 12 08:07:51.643: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 08:07:51.647: INFO: Pod pod-with-poststart-http-hook still exists May 12 08:07:53.643: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 08:07:53.647: INFO: Pod pod-with-poststart-http-hook still exists May 12 08:07:55.643: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 08:07:55.648: INFO: Pod pod-with-poststart-http-hook still exists May 12 08:07:57.643: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 08:07:57.648: INFO: Pod pod-with-poststart-http-hook still exists May 12 08:07:59.643: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 08:07:59.647: INFO: Pod pod-with-poststart-http-hook still exists May 12 08:08:01.643: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 08:08:01.646: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:08:01.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-phbps" for this suite. 
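The lifecycle-hook case wires a postStart httpGet hook to the handler pod created in the BeforeEach step, checks that the hook fired, then deletes the hooked pod. A trimmed-down sketch of the hooked pod; the path, port, and host below are placeholders, not values captured from this run:

  kubectl create -f - --namespace=<test-namespace> <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-http-hook
  spec:
    containers:
    - name: pod-with-poststart-http-hook
      image: docker.io/library/nginx:1.14-alpine
      lifecycle:
        postStart:
          httpGet:
            path: /echo?msg=poststart    # placeholder path
            port: 8080                   # placeholder port
            host: 10.244.1.10            # placeholder: IP of the HTTPGet handler pod
  EOF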
May 12 08:08:33.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:08:33.692: INFO: namespace: e2e-tests-container-lifecycle-hook-phbps, resource: bindings, ignored listing per whitelist May 12 08:08:33.721: INFO: namespace e2e-tests-container-lifecycle-hook-phbps deletion completed in 32.070677276s • [SLOW TEST:58.554 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:08:33.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-c643fa22-9427-11ea-bb6f-0242ac11001c STEP: Creating configMap with name cm-test-opt-upd-c643fa6c-9427-11ea-bb6f-0242ac11001c STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-c643fa22-9427-11ea-bb6f-0242ac11001c STEP: Updating configmap cm-test-opt-upd-c643fa6c-9427-11ea-bb6f-0242ac11001c STEP: Creating configMap with name cm-test-opt-create-c643fa8d-9427-11ea-bb6f-0242ac11001c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:08:48.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mhb2l" for this suite. 
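The optional-updates case mounts several configMaps into one projected volume with optional: true, then deletes one, updates one, and creates one while the pod watches the mounted files. A minimal shape for such a volume; the pod name is illustrative and the configMap names echo the log's entries without their per-run suffixes:

  kubectl create -f - --namespace=<test-namespace> <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-optional-demo
  spec:
    containers:
    - name: watcher
      image: busybox
      command: ["sh", "-c", "while true; do ls /etc/projected; sleep 5; done"]
      volumeMounts:
      - name: projected-cm
        mountPath: /etc/projected
    volumes:
    - name: projected-cm
      projected:
        sources:
        - configMap:
            name: cm-test-opt-del
            optional: true
        - configMap:
            name: cm-test-opt-upd
            optional: true
        - configMap:
            name: cm-test-opt-create
            optional: true
  EOF

Because every source is optional, the pod starts even while a referenced configMap is absent, and the mounted directory is expected to follow the deletions, updates, and late creations, which is what the "waiting to observe update in volume" step polls for.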
May 12 08:09:12.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:09:12.337: INFO: namespace: e2e-tests-projected-mhb2l, resource: bindings, ignored listing per whitelist May 12 08:09:12.370: INFO: namespace e2e-tests-projected-mhb2l deletion completed in 24.083058514s • [SLOW TEST:38.649 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:09:12.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-748c STEP: Creating a pod to test atomic-volume-subpath May 12 08:09:13.303: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-748c" in namespace "e2e-tests-subpath-sdhfq" to be "success or failure" May 12 08:09:13.453: INFO: Pod "pod-subpath-test-downwardapi-748c": Phase="Pending", Reason="", readiness=false. Elapsed: 149.533296ms May 12 08:09:15.944: INFO: Pod "pod-subpath-test-downwardapi-748c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.640944228s May 12 08:09:17.948: INFO: Pod "pod-subpath-test-downwardapi-748c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.645313478s May 12 08:09:19.953: INFO: Pod "pod-subpath-test-downwardapi-748c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.65003137s May 12 08:09:21.957: INFO: Pod "pod-subpath-test-downwardapi-748c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.654119291s May 12 08:09:23.960: INFO: Pod "pod-subpath-test-downwardapi-748c": Phase="Running", Reason="", readiness=true. Elapsed: 10.657324202s May 12 08:09:26.022: INFO: Pod "pod-subpath-test-downwardapi-748c": Phase="Running", Reason="", readiness=false. Elapsed: 12.718771605s May 12 08:09:28.026: INFO: Pod "pod-subpath-test-downwardapi-748c": Phase="Running", Reason="", readiness=false. Elapsed: 14.722479714s May 12 08:09:30.030: INFO: Pod "pod-subpath-test-downwardapi-748c": Phase="Running", Reason="", readiness=false. Elapsed: 16.726787865s May 12 08:09:32.034: INFO: Pod "pod-subpath-test-downwardapi-748c": Phase="Running", Reason="", readiness=false. Elapsed: 18.730361497s May 12 08:09:34.038: INFO: Pod "pod-subpath-test-downwardapi-748c": Phase="Running", Reason="", readiness=false. Elapsed: 20.734690367s May 12 08:09:36.351: INFO: Pod "pod-subpath-test-downwardapi-748c": Phase="Running", Reason="", readiness=false. 
Elapsed: 23.04820683s May 12 08:09:38.357: INFO: Pod "pod-subpath-test-downwardapi-748c": Phase="Running", Reason="", readiness=false. Elapsed: 25.054172399s May 12 08:09:40.362: INFO: Pod "pod-subpath-test-downwardapi-748c": Phase="Running", Reason="", readiness=false. Elapsed: 27.058757501s May 12 08:09:42.367: INFO: Pod "pod-subpath-test-downwardapi-748c": Phase="Running", Reason="", readiness=false. Elapsed: 29.063408085s May 12 08:09:44.370: INFO: Pod "pod-subpath-test-downwardapi-748c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.066565382s STEP: Saw pod success May 12 08:09:44.370: INFO: Pod "pod-subpath-test-downwardapi-748c" satisfied condition "success or failure" May 12 08:09:44.372: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-748c container test-container-subpath-downwardapi-748c: STEP: delete the pod May 12 08:09:44.593: INFO: Waiting for pod pod-subpath-test-downwardapi-748c to disappear May 12 08:09:44.639: INFO: Pod pod-subpath-test-downwardapi-748c no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-748c May 12 08:09:44.639: INFO: Deleting pod "pod-subpath-test-downwardapi-748c" in namespace "e2e-tests-subpath-sdhfq" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:09:44.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-sdhfq" for this suite. May 12 08:09:52.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:09:52.837: INFO: namespace: e2e-tests-subpath-sdhfq, resource: bindings, ignored listing per whitelist May 12 08:09:52.876: INFO: namespace e2e-tests-subpath-sdhfq deletion completed in 8.230287212s • [SLOW TEST:40.505 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:09:52.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 12 08:09:52.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-m2ndn' May 12 08:09:56.488: INFO: stderr: "" May 12 08:09:56.488: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
May 12 08:09:57.492: INFO: Selector matched 1 pods for map[app:redis] May 12 08:09:57.492: INFO: Found 0 / 1 May 12 08:09:58.492: INFO: Selector matched 1 pods for map[app:redis] May 12 08:09:58.492: INFO: Found 0 / 1 May 12 08:09:59.492: INFO: Selector matched 1 pods for map[app:redis] May 12 08:09:59.492: INFO: Found 0 / 1 May 12 08:10:00.502: INFO: Selector matched 1 pods for map[app:redis] May 12 08:10:00.502: INFO: Found 0 / 1 May 12 08:10:01.492: INFO: Selector matched 1 pods for map[app:redis] May 12 08:10:01.493: INFO: Found 0 / 1 May 12 08:10:02.492: INFO: Selector matched 1 pods for map[app:redis] May 12 08:10:02.492: INFO: Found 1 / 1 May 12 08:10:02.492: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 12 08:10:02.495: INFO: Selector matched 1 pods for map[app:redis] May 12 08:10:02.495: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 12 08:10:02.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-md8hw --namespace=e2e-tests-kubectl-m2ndn -p {"metadata":{"annotations":{"x":"y"}}}' May 12 08:10:02.602: INFO: stderr: "" May 12 08:10:02.602: INFO: stdout: "pod/redis-master-md8hw patched\n" STEP: checking annotations May 12 08:10:03.242: INFO: Selector matched 1 pods for map[app:redis] May 12 08:10:03.242: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:10:03.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-m2ndn" for this suite. May 12 08:10:27.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:10:27.460: INFO: namespace: e2e-tests-kubectl-m2ndn, resource: bindings, ignored listing per whitelist May 12 08:10:27.509: INFO: namespace e2e-tests-kubectl-m2ndn deletion completed in 24.261610785s • [SLOW TEST:34.634 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:10:27.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-0a27de37-9428-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume secrets May 12 08:10:27.904: INFO: Waiting up to 5m0s for pod "pod-secrets-0a2b626b-9428-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-secrets-hq2fz" to be "success or failure" May 
12 08:10:27.909: INFO: Pod "pod-secrets-0a2b626b-9428-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.646819ms May 12 08:10:29.963: INFO: Pod "pod-secrets-0a2b626b-9428-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059329983s May 12 08:10:31.968: INFO: Pod "pod-secrets-0a2b626b-9428-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064785984s May 12 08:10:34.239: INFO: Pod "pod-secrets-0a2b626b-9428-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.335267032s STEP: Saw pod success May 12 08:10:34.239: INFO: Pod "pod-secrets-0a2b626b-9428-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:10:34.242: INFO: Trying to get logs from node hunter-worker pod pod-secrets-0a2b626b-9428-11ea-bb6f-0242ac11001c container secret-volume-test: STEP: delete the pod May 12 08:10:34.505: INFO: Waiting for pod pod-secrets-0a2b626b-9428-11ea-bb6f-0242ac11001c to disappear May 12 08:10:34.897: INFO: Pod pod-secrets-0a2b626b-9428-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:10:34.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-hq2fz" for this suite. May 12 08:10:41.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:10:41.214: INFO: namespace: e2e-tests-secrets-hq2fz, resource: bindings, ignored listing per whitelist May 12 08:10:41.234: INFO: namespace e2e-tests-secrets-hq2fz deletion completed in 6.333386596s • [SLOW TEST:13.725 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:10:41.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 12 08:10:55.529: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9vjs4 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 08:10:55.529: INFO: >>> kubeConfig: /root/.kube/config I0512 08:10:55.567043 6 log.go:172] (0xc0018fa2c0) (0xc002889540) Create stream I0512 08:10:55.567084 6 log.go:172] (0xc0018fa2c0) (0xc002889540) 
Stream added, broadcasting: 1 I0512 08:10:55.569872 6 log.go:172] (0xc0018fa2c0) Reply frame received for 1 I0512 08:10:55.569911 6 log.go:172] (0xc0018fa2c0) (0xc000113720) Create stream I0512 08:10:55.569922 6 log.go:172] (0xc0018fa2c0) (0xc000113720) Stream added, broadcasting: 3 I0512 08:10:55.571167 6 log.go:172] (0xc0018fa2c0) Reply frame received for 3 I0512 08:10:55.571206 6 log.go:172] (0xc0018fa2c0) (0xc0028895e0) Create stream I0512 08:10:55.571220 6 log.go:172] (0xc0018fa2c0) (0xc0028895e0) Stream added, broadcasting: 5 I0512 08:10:55.572160 6 log.go:172] (0xc0018fa2c0) Reply frame received for 5 I0512 08:10:55.636936 6 log.go:172] (0xc0018fa2c0) Data frame received for 5 I0512 08:10:55.636974 6 log.go:172] (0xc0018fa2c0) Data frame received for 3 I0512 08:10:55.636991 6 log.go:172] (0xc000113720) (3) Data frame handling I0512 08:10:55.637021 6 log.go:172] (0xc000113720) (3) Data frame sent I0512 08:10:55.637036 6 log.go:172] (0xc0018fa2c0) Data frame received for 3 I0512 08:10:55.637043 6 log.go:172] (0xc000113720) (3) Data frame handling I0512 08:10:55.637100 6 log.go:172] (0xc0028895e0) (5) Data frame handling I0512 08:10:55.638711 6 log.go:172] (0xc0018fa2c0) Data frame received for 1 I0512 08:10:55.638739 6 log.go:172] (0xc002889540) (1) Data frame handling I0512 08:10:55.638758 6 log.go:172] (0xc002889540) (1) Data frame sent I0512 08:10:55.638780 6 log.go:172] (0xc0018fa2c0) (0xc002889540) Stream removed, broadcasting: 1 I0512 08:10:55.638807 6 log.go:172] (0xc0018fa2c0) Go away received I0512 08:10:55.638892 6 log.go:172] (0xc0018fa2c0) (0xc002889540) Stream removed, broadcasting: 1 I0512 08:10:55.638910 6 log.go:172] (0xc0018fa2c0) (0xc000113720) Stream removed, broadcasting: 3 I0512 08:10:55.638921 6 log.go:172] (0xc0018fa2c0) (0xc0028895e0) Stream removed, broadcasting: 5 May 12 08:10:55.638: INFO: Exec stderr: "" May 12 08:10:55.638: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9vjs4 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 08:10:55.638: INFO: >>> kubeConfig: /root/.kube/config I0512 08:10:55.670376 6 log.go:172] (0xc0005d9080) (0xc000cc40a0) Create stream I0512 08:10:55.670411 6 log.go:172] (0xc0005d9080) (0xc000cc40a0) Stream added, broadcasting: 1 I0512 08:10:55.672566 6 log.go:172] (0xc0005d9080) Reply frame received for 1 I0512 08:10:55.672598 6 log.go:172] (0xc0005d9080) (0xc000113860) Create stream I0512 08:10:55.672607 6 log.go:172] (0xc0005d9080) (0xc000113860) Stream added, broadcasting: 3 I0512 08:10:55.673492 6 log.go:172] (0xc0005d9080) Reply frame received for 3 I0512 08:10:55.673520 6 log.go:172] (0xc0005d9080) (0xc000cc4140) Create stream I0512 08:10:55.673538 6 log.go:172] (0xc0005d9080) (0xc000cc4140) Stream added, broadcasting: 5 I0512 08:10:55.674280 6 log.go:172] (0xc0005d9080) Reply frame received for 5 I0512 08:10:55.740382 6 log.go:172] (0xc0005d9080) Data frame received for 3 I0512 08:10:55.740425 6 log.go:172] (0xc000113860) (3) Data frame handling I0512 08:10:55.740433 6 log.go:172] (0xc000113860) (3) Data frame sent I0512 08:10:55.740454 6 log.go:172] (0xc0005d9080) Data frame received for 3 I0512 08:10:55.740464 6 log.go:172] (0xc000113860) (3) Data frame handling I0512 08:10:55.740482 6 log.go:172] (0xc0005d9080) Data frame received for 5 I0512 08:10:55.740490 6 log.go:172] (0xc000cc4140) (5) Data frame handling I0512 08:10:55.741693 6 log.go:172] (0xc0005d9080) Data frame received for 1 I0512 
08:10:55.741713 6 log.go:172] (0xc000cc40a0) (1) Data frame handling I0512 08:10:55.741720 6 log.go:172] (0xc000cc40a0) (1) Data frame sent I0512 08:10:55.741728 6 log.go:172] (0xc0005d9080) (0xc000cc40a0) Stream removed, broadcasting: 1 I0512 08:10:55.741824 6 log.go:172] (0xc0005d9080) Go away received I0512 08:10:55.741887 6 log.go:172] (0xc0005d9080) (0xc000cc40a0) Stream removed, broadcasting: 1 I0512 08:10:55.741910 6 log.go:172] (0xc0005d9080) (0xc000113860) Stream removed, broadcasting: 3 I0512 08:10:55.741915 6 log.go:172] (0xc0005d9080) (0xc000cc4140) Stream removed, broadcasting: 5 May 12 08:10:55.741: INFO: Exec stderr: "" May 12 08:10:55.741: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9vjs4 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 08:10:55.741: INFO: >>> kubeConfig: /root/.kube/config I0512 08:10:55.772855 6 log.go:172] (0xc0005d9600) (0xc000cc4500) Create stream I0512 08:10:55.772886 6 log.go:172] (0xc0005d9600) (0xc000cc4500) Stream added, broadcasting: 1 I0512 08:10:55.776093 6 log.go:172] (0xc0005d9600) Reply frame received for 1 I0512 08:10:55.776142 6 log.go:172] (0xc0005d9600) (0xc0001139a0) Create stream I0512 08:10:55.776155 6 log.go:172] (0xc0005d9600) (0xc0001139a0) Stream added, broadcasting: 3 I0512 08:10:55.777077 6 log.go:172] (0xc0005d9600) Reply frame received for 3 I0512 08:10:55.777098 6 log.go:172] (0xc0005d9600) (0xc00140f040) Create stream I0512 08:10:55.777231 6 log.go:172] (0xc0005d9600) (0xc00140f040) Stream added, broadcasting: 5 I0512 08:10:55.778360 6 log.go:172] (0xc0005d9600) Reply frame received for 5 I0512 08:10:55.831685 6 log.go:172] (0xc0005d9600) Data frame received for 3 I0512 08:10:55.831745 6 log.go:172] (0xc0001139a0) (3) Data frame handling I0512 08:10:55.831772 6 log.go:172] (0xc0001139a0) (3) Data frame sent I0512 08:10:55.831794 6 log.go:172] (0xc0005d9600) Data frame received for 3 I0512 08:10:55.831812 6 log.go:172] (0xc0001139a0) (3) Data frame handling I0512 08:10:55.831852 6 log.go:172] (0xc0005d9600) Data frame received for 5 I0512 08:10:55.831871 6 log.go:172] (0xc00140f040) (5) Data frame handling I0512 08:10:55.833568 6 log.go:172] (0xc0005d9600) Data frame received for 1 I0512 08:10:55.833611 6 log.go:172] (0xc000cc4500) (1) Data frame handling I0512 08:10:55.833656 6 log.go:172] (0xc000cc4500) (1) Data frame sent I0512 08:10:55.833757 6 log.go:172] (0xc0005d9600) (0xc000cc4500) Stream removed, broadcasting: 1 I0512 08:10:55.833886 6 log.go:172] (0xc0005d9600) Go away received I0512 08:10:55.833957 6 log.go:172] (0xc0005d9600) (0xc000cc4500) Stream removed, broadcasting: 1 I0512 08:10:55.833997 6 log.go:172] (0xc0005d9600) (0xc0001139a0) Stream removed, broadcasting: 3 I0512 08:10:55.834012 6 log.go:172] (0xc0005d9600) (0xc00140f040) Stream removed, broadcasting: 5 May 12 08:10:55.834: INFO: Exec stderr: "" May 12 08:10:55.834: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9vjs4 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 08:10:55.834: INFO: >>> kubeConfig: /root/.kube/config I0512 08:10:55.860998 6 log.go:172] (0xc00177e2c0) (0xc00140f4a0) Create stream I0512 08:10:55.861041 6 log.go:172] (0xc00177e2c0) (0xc00140f4a0) Stream added, broadcasting: 1 I0512 08:10:55.863639 6 log.go:172] (0xc00177e2c0) Reply frame received for 1 I0512 08:10:55.863686 6 log.go:172] (0xc00177e2c0) 
(0xc001a67cc0) Create stream I0512 08:10:55.863697 6 log.go:172] (0xc00177e2c0) (0xc001a67cc0) Stream added, broadcasting: 3 I0512 08:10:55.864672 6 log.go:172] (0xc00177e2c0) Reply frame received for 3 I0512 08:10:55.864710 6 log.go:172] (0xc00177e2c0) (0xc00140f540) Create stream I0512 08:10:55.864720 6 log.go:172] (0xc00177e2c0) (0xc00140f540) Stream added, broadcasting: 5 I0512 08:10:55.865892 6 log.go:172] (0xc00177e2c0) Reply frame received for 5 I0512 08:10:55.934375 6 log.go:172] (0xc00177e2c0) Data frame received for 5 I0512 08:10:55.934404 6 log.go:172] (0xc00140f540) (5) Data frame handling I0512 08:10:55.934449 6 log.go:172] (0xc00177e2c0) Data frame received for 3 I0512 08:10:55.934461 6 log.go:172] (0xc001a67cc0) (3) Data frame handling I0512 08:10:55.934471 6 log.go:172] (0xc001a67cc0) (3) Data frame sent I0512 08:10:55.934481 6 log.go:172] (0xc00177e2c0) Data frame received for 3 I0512 08:10:55.934488 6 log.go:172] (0xc001a67cc0) (3) Data frame handling I0512 08:10:55.936279 6 log.go:172] (0xc00177e2c0) Data frame received for 1 I0512 08:10:55.936316 6 log.go:172] (0xc00140f4a0) (1) Data frame handling I0512 08:10:55.936335 6 log.go:172] (0xc00140f4a0) (1) Data frame sent I0512 08:10:55.936371 6 log.go:172] (0xc00177e2c0) (0xc00140f4a0) Stream removed, broadcasting: 1 I0512 08:10:55.936457 6 log.go:172] (0xc00177e2c0) Go away received I0512 08:10:55.936537 6 log.go:172] (0xc00177e2c0) (0xc00140f4a0) Stream removed, broadcasting: 1 I0512 08:10:55.936558 6 log.go:172] (0xc00177e2c0) (0xc001a67cc0) Stream removed, broadcasting: 3 I0512 08:10:55.936566 6 log.go:172] (0xc00177e2c0) (0xc00140f540) Stream removed, broadcasting: 5 May 12 08:10:55.936: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 12 08:10:55.936: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9vjs4 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 08:10:55.936: INFO: >>> kubeConfig: /root/.kube/config I0512 08:10:55.967490 6 log.go:172] (0xc0009022c0) (0xc0012c8000) Create stream I0512 08:10:55.967522 6 log.go:172] (0xc0009022c0) (0xc0012c8000) Stream added, broadcasting: 1 I0512 08:10:55.969778 6 log.go:172] (0xc0009022c0) Reply frame received for 1 I0512 08:10:55.969808 6 log.go:172] (0xc0009022c0) (0xc000113ae0) Create stream I0512 08:10:55.969818 6 log.go:172] (0xc0009022c0) (0xc000113ae0) Stream added, broadcasting: 3 I0512 08:10:55.970615 6 log.go:172] (0xc0009022c0) Reply frame received for 3 I0512 08:10:55.970650 6 log.go:172] (0xc0009022c0) (0xc000113c20) Create stream I0512 08:10:55.970665 6 log.go:172] (0xc0009022c0) (0xc000113c20) Stream added, broadcasting: 5 I0512 08:10:55.971776 6 log.go:172] (0xc0009022c0) Reply frame received for 5 I0512 08:10:56.024034 6 log.go:172] (0xc0009022c0) Data frame received for 3 I0512 08:10:56.024075 6 log.go:172] (0xc000113ae0) (3) Data frame handling I0512 08:10:56.024087 6 log.go:172] (0xc000113ae0) (3) Data frame sent I0512 08:10:56.024098 6 log.go:172] (0xc0009022c0) Data frame received for 3 I0512 08:10:56.024110 6 log.go:172] (0xc000113ae0) (3) Data frame handling I0512 08:10:56.024131 6 log.go:172] (0xc0009022c0) Data frame received for 5 I0512 08:10:56.024139 6 log.go:172] (0xc000113c20) (5) Data frame handling I0512 08:10:56.025272 6 log.go:172] (0xc0009022c0) Data frame received for 1 I0512 08:10:56.025394 6 log.go:172] (0xc0012c8000) (1) Data frame handling I0512 
08:10:56.025417 6 log.go:172] (0xc0012c8000) (1) Data frame sent I0512 08:10:56.025431 6 log.go:172] (0xc0009022c0) (0xc0012c8000) Stream removed, broadcasting: 1 I0512 08:10:56.025451 6 log.go:172] (0xc0009022c0) Go away received I0512 08:10:56.025623 6 log.go:172] (0xc0009022c0) (0xc0012c8000) Stream removed, broadcasting: 1 I0512 08:10:56.025659 6 log.go:172] (0xc0009022c0) (0xc000113ae0) Stream removed, broadcasting: 3 I0512 08:10:56.025672 6 log.go:172] (0xc0009022c0) (0xc000113c20) Stream removed, broadcasting: 5 May 12 08:10:56.025: INFO: Exec stderr: "" May 12 08:10:56.025: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9vjs4 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 08:10:56.025: INFO: >>> kubeConfig: /root/.kube/config I0512 08:10:56.056529 6 log.go:172] (0xc000902790) (0xc0012c83c0) Create stream I0512 08:10:56.056553 6 log.go:172] (0xc000902790) (0xc0012c83c0) Stream added, broadcasting: 1 I0512 08:10:56.067239 6 log.go:172] (0xc000902790) Reply frame received for 1 I0512 08:10:56.067287 6 log.go:172] (0xc000902790) (0xc002552000) Create stream I0512 08:10:56.067297 6 log.go:172] (0xc000902790) (0xc002552000) Stream added, broadcasting: 3 I0512 08:10:56.068389 6 log.go:172] (0xc000902790) Reply frame received for 3 I0512 08:10:56.068427 6 log.go:172] (0xc000902790) (0xc002552140) Create stream I0512 08:10:56.068450 6 log.go:172] (0xc000902790) (0xc002552140) Stream added, broadcasting: 5 I0512 08:10:56.069702 6 log.go:172] (0xc000902790) Reply frame received for 5 I0512 08:10:56.121863 6 log.go:172] (0xc000902790) Data frame received for 5 I0512 08:10:56.121947 6 log.go:172] (0xc002552140) (5) Data frame handling I0512 08:10:56.122869 6 log.go:172] (0xc000902790) Data frame received for 3 I0512 08:10:56.122888 6 log.go:172] (0xc002552000) (3) Data frame handling I0512 08:10:56.122904 6 log.go:172] (0xc002552000) (3) Data frame sent I0512 08:10:56.122917 6 log.go:172] (0xc000902790) Data frame received for 3 I0512 08:10:56.122933 6 log.go:172] (0xc002552000) (3) Data frame handling I0512 08:10:56.124453 6 log.go:172] (0xc000902790) Data frame received for 1 I0512 08:10:56.124474 6 log.go:172] (0xc0012c83c0) (1) Data frame handling I0512 08:10:56.124492 6 log.go:172] (0xc0012c83c0) (1) Data frame sent I0512 08:10:56.124505 6 log.go:172] (0xc000902790) (0xc0012c83c0) Stream removed, broadcasting: 1 I0512 08:10:56.124603 6 log.go:172] (0xc000902790) (0xc0012c83c0) Stream removed, broadcasting: 1 I0512 08:10:56.124619 6 log.go:172] (0xc000902790) (0xc002552000) Stream removed, broadcasting: 3 I0512 08:10:56.124639 6 log.go:172] (0xc000902790) (0xc002552140) Stream removed, broadcasting: 5 May 12 08:10:56.124: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 12 08:10:56.124: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9vjs4 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 08:10:56.124: INFO: >>> kubeConfig: /root/.kube/config I0512 08:10:56.125309 6 log.go:172] (0xc000902790) Go away received I0512 08:10:56.147658 6 log.go:172] (0xc0009022c0) (0xc002552500) Create stream I0512 08:10:56.147684 6 log.go:172] (0xc0009022c0) (0xc002552500) Stream added, broadcasting: 1 I0512 08:10:56.149508 6 log.go:172] (0xc0009022c0) Reply frame received for 1 I0512 
08:10:56.149534 6 log.go:172] (0xc0009022c0) (0xc0025525a0) Create stream I0512 08:10:56.149542 6 log.go:172] (0xc0009022c0) (0xc0025525a0) Stream added, broadcasting: 3 I0512 08:10:56.150263 6 log.go:172] (0xc0009022c0) Reply frame received for 3 I0512 08:10:56.150292 6 log.go:172] (0xc0009022c0) (0xc002552640) Create stream I0512 08:10:56.150301 6 log.go:172] (0xc0009022c0) (0xc002552640) Stream added, broadcasting: 5 I0512 08:10:56.151126 6 log.go:172] (0xc0009022c0) Reply frame received for 5 I0512 08:10:56.200749 6 log.go:172] (0xc0009022c0) Data frame received for 5 I0512 08:10:56.200788 6 log.go:172] (0xc002552640) (5) Data frame handling I0512 08:10:56.200832 6 log.go:172] (0xc0009022c0) Data frame received for 3 I0512 08:10:56.200853 6 log.go:172] (0xc0025525a0) (3) Data frame handling I0512 08:10:56.200866 6 log.go:172] (0xc0025525a0) (3) Data frame sent I0512 08:10:56.200877 6 log.go:172] (0xc0009022c0) Data frame received for 3 I0512 08:10:56.200891 6 log.go:172] (0xc0025525a0) (3) Data frame handling I0512 08:10:56.202765 6 log.go:172] (0xc0009022c0) Data frame received for 1 I0512 08:10:56.202795 6 log.go:172] (0xc002552500) (1) Data frame handling I0512 08:10:56.202821 6 log.go:172] (0xc002552500) (1) Data frame sent I0512 08:10:56.202849 6 log.go:172] (0xc0009022c0) (0xc002552500) Stream removed, broadcasting: 1 I0512 08:10:56.202878 6 log.go:172] (0xc0009022c0) Go away received I0512 08:10:56.203063 6 log.go:172] (0xc0009022c0) (0xc002552500) Stream removed, broadcasting: 1 I0512 08:10:56.203083 6 log.go:172] (0xc0009022c0) (0xc0025525a0) Stream removed, broadcasting: 3 I0512 08:10:56.203094 6 log.go:172] (0xc0009022c0) (0xc002552640) Stream removed, broadcasting: 5 May 12 08:10:56.203: INFO: Exec stderr: "" May 12 08:10:56.203: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9vjs4 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 08:10:56.203: INFO: >>> kubeConfig: /root/.kube/config I0512 08:10:56.230967 6 log.go:172] (0xc0011744d0) (0xc00242e1e0) Create stream I0512 08:10:56.230991 6 log.go:172] (0xc0011744d0) (0xc00242e1e0) Stream added, broadcasting: 1 I0512 08:10:56.232847 6 log.go:172] (0xc0011744d0) Reply frame received for 1 I0512 08:10:56.232887 6 log.go:172] (0xc0011744d0) (0xc0021a80a0) Create stream I0512 08:10:56.232905 6 log.go:172] (0xc0011744d0) (0xc0021a80a0) Stream added, broadcasting: 3 I0512 08:10:56.233974 6 log.go:172] (0xc0011744d0) Reply frame received for 3 I0512 08:10:56.234018 6 log.go:172] (0xc0011744d0) (0xc0023e8000) Create stream I0512 08:10:56.234027 6 log.go:172] (0xc0011744d0) (0xc0023e8000) Stream added, broadcasting: 5 I0512 08:10:56.234840 6 log.go:172] (0xc0011744d0) Reply frame received for 5 I0512 08:10:56.289700 6 log.go:172] (0xc0011744d0) Data frame received for 5 I0512 08:10:56.289748 6 log.go:172] (0xc0023e8000) (5) Data frame handling I0512 08:10:56.289787 6 log.go:172] (0xc0011744d0) Data frame received for 3 I0512 08:10:56.289803 6 log.go:172] (0xc0021a80a0) (3) Data frame handling I0512 08:10:56.289817 6 log.go:172] (0xc0021a80a0) (3) Data frame sent I0512 08:10:56.289833 6 log.go:172] (0xc0011744d0) Data frame received for 3 I0512 08:10:56.289846 6 log.go:172] (0xc0021a80a0) (3) Data frame handling I0512 08:10:56.291145 6 log.go:172] (0xc0011744d0) Data frame received for 1 I0512 08:10:56.291173 6 log.go:172] (0xc00242e1e0) (1) Data frame handling I0512 08:10:56.291188 6 log.go:172] 
(0xc00242e1e0) (1) Data frame sent I0512 08:10:56.291204 6 log.go:172] (0xc0011744d0) (0xc00242e1e0) Stream removed, broadcasting: 1 I0512 08:10:56.291314 6 log.go:172] (0xc0011744d0) Go away received I0512 08:10:56.291360 6 log.go:172] (0xc0011744d0) (0xc00242e1e0) Stream removed, broadcasting: 1 I0512 08:10:56.291383 6 log.go:172] (0xc0011744d0) (0xc0021a80a0) Stream removed, broadcasting: 3 I0512 08:10:56.291398 6 log.go:172] (0xc0011744d0) (0xc0023e8000) Stream removed, broadcasting: 5 May 12 08:10:56.291: INFO: Exec stderr: "" May 12 08:10:56.291: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9vjs4 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 08:10:56.291: INFO: >>> kubeConfig: /root/.kube/config I0512 08:10:56.327084 6 log.go:172] (0xc001174a50) (0xc00242e3c0) Create stream I0512 08:10:56.327113 6 log.go:172] (0xc001174a50) (0xc00242e3c0) Stream added, broadcasting: 1 I0512 08:10:56.337592 6 log.go:172] (0xc001174a50) Reply frame received for 1 I0512 08:10:56.337671 6 log.go:172] (0xc001174a50) (0xc0014f8000) Create stream I0512 08:10:56.337690 6 log.go:172] (0xc001174a50) (0xc0014f8000) Stream added, broadcasting: 3 I0512 08:10:56.339249 6 log.go:172] (0xc001174a50) Reply frame received for 3 I0512 08:10:56.339372 6 log.go:172] (0xc001174a50) (0xc0025526e0) Create stream I0512 08:10:56.339421 6 log.go:172] (0xc001174a50) (0xc0025526e0) Stream added, broadcasting: 5 I0512 08:10:56.341710 6 log.go:172] (0xc001174a50) Reply frame received for 5 I0512 08:10:56.419233 6 log.go:172] (0xc001174a50) Data frame received for 5 I0512 08:10:56.419261 6 log.go:172] (0xc0025526e0) (5) Data frame handling I0512 08:10:56.419286 6 log.go:172] (0xc001174a50) Data frame received for 3 I0512 08:10:56.419308 6 log.go:172] (0xc0014f8000) (3) Data frame handling I0512 08:10:56.419325 6 log.go:172] (0xc0014f8000) (3) Data frame sent I0512 08:10:56.419335 6 log.go:172] (0xc001174a50) Data frame received for 3 I0512 08:10:56.419347 6 log.go:172] (0xc0014f8000) (3) Data frame handling I0512 08:10:56.420531 6 log.go:172] (0xc001174a50) Data frame received for 1 I0512 08:10:56.420556 6 log.go:172] (0xc00242e3c0) (1) Data frame handling I0512 08:10:56.420585 6 log.go:172] (0xc00242e3c0) (1) Data frame sent I0512 08:10:56.420602 6 log.go:172] (0xc001174a50) (0xc00242e3c0) Stream removed, broadcasting: 1 I0512 08:10:56.420619 6 log.go:172] (0xc001174a50) Go away received I0512 08:10:56.420684 6 log.go:172] (0xc001174a50) (0xc00242e3c0) Stream removed, broadcasting: 1 I0512 08:10:56.420700 6 log.go:172] (0xc001174a50) (0xc0014f8000) Stream removed, broadcasting: 3 I0512 08:10:56.420712 6 log.go:172] (0xc001174a50) (0xc0025526e0) Stream removed, broadcasting: 5 May 12 08:10:56.420: INFO: Exec stderr: "" May 12 08:10:56.420: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-9vjs4 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 08:10:56.420: INFO: >>> kubeConfig: /root/.kube/config I0512 08:10:56.542657 6 log.go:172] (0xc001174dc0) (0xc00242e500) Create stream I0512 08:10:56.542711 6 log.go:172] (0xc001174dc0) (0xc00242e500) Stream added, broadcasting: 1 I0512 08:10:56.544398 6 log.go:172] (0xc001174dc0) Reply frame received for 1 I0512 08:10:56.544426 6 log.go:172] (0xc001174dc0) (0xc002552820) Create stream I0512 08:10:56.544439 6 log.go:172] (0xc001174dc0) 
(0xc002552820) Stream added, broadcasting: 3 I0512 08:10:56.545306 6 log.go:172] (0xc001174dc0) Reply frame received for 3 I0512 08:10:56.545335 6 log.go:172] (0xc001174dc0) (0xc0014f8140) Create stream I0512 08:10:56.545348 6 log.go:172] (0xc001174dc0) (0xc0014f8140) Stream added, broadcasting: 5 I0512 08:10:56.546203 6 log.go:172] (0xc001174dc0) Reply frame received for 5 I0512 08:10:56.613701 6 log.go:172] (0xc001174dc0) Data frame received for 5 I0512 08:10:56.613752 6 log.go:172] (0xc0014f8140) (5) Data frame handling I0512 08:10:56.613790 6 log.go:172] (0xc001174dc0) Data frame received for 3 I0512 08:10:56.613822 6 log.go:172] (0xc002552820) (3) Data frame handling I0512 08:10:56.613868 6 log.go:172] (0xc002552820) (3) Data frame sent I0512 08:10:56.613896 6 log.go:172] (0xc001174dc0) Data frame received for 3 I0512 08:10:56.613908 6 log.go:172] (0xc002552820) (3) Data frame handling I0512 08:10:56.615738 6 log.go:172] (0xc001174dc0) Data frame received for 1 I0512 08:10:56.615776 6 log.go:172] (0xc00242e500) (1) Data frame handling I0512 08:10:56.615802 6 log.go:172] (0xc00242e500) (1) Data frame sent I0512 08:10:56.615812 6 log.go:172] (0xc001174dc0) (0xc00242e500) Stream removed, broadcasting: 1 I0512 08:10:56.615821 6 log.go:172] (0xc001174dc0) Go away received I0512 08:10:56.615982 6 log.go:172] (0xc001174dc0) (0xc00242e500) Stream removed, broadcasting: 1 I0512 08:10:56.616003 6 log.go:172] (0xc001174dc0) (0xc002552820) Stream removed, broadcasting: 3 I0512 08:10:56.616016 6 log.go:172] (0xc001174dc0) (0xc0014f8140) Stream removed, broadcasting: 5 May 12 08:10:56.616: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:10:56.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-9vjs4" for this suite. 
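The exec transcripts above are easier to follow next to a concrete spec. Below is a minimal sketch of a pod that reproduces the distinction the test checks — one container whose /etc/hosts is kubelet-managed and one that mounts its own copy — with the pod name, image and mount all illustrative assumptions rather than the objects created by this run:

# First container keeps the kubelet-managed /etc/hosts; the second masks it
# with its own hostPath mount, so the kubelet leaves that copy alone.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo
spec:
  containers:
  - name: managed
    image: busybox:1.29
    command: ["sleep", "3600"]
  - name: unmanaged
    image: busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: hosts
      mountPath: /etc/hosts
  volumes:
  - name: hosts
    hostPath:
      path: /etc/hosts
EOF
# Compare the two views once the pod is running.
kubectl exec etc-hosts-demo -c managed -- cat /etc/hosts
kubectl exec etc-hosts-demo -c unmanaged -- cat /etc/hosts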
May 12 08:11:46.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:11:46.848: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-9vjs4, resource: bindings, ignored listing per whitelist May 12 08:11:47.253: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-9vjs4 deletion completed in 50.633779035s • [SLOW TEST:66.018 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:11:47.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 12 08:11:48.230: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4nhxs,SelfLink:/api/v1/namespaces/e2e-tests-watch-4nhxs/configmaps/e2e-watch-test-label-changed,UID:39f0d821-9428-11ea-99e8-0242ac110002,ResourceVersion:10122141,Generation:0,CreationTimestamp:2020-05-12 08:11:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 08:11:48.231: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4nhxs,SelfLink:/api/v1/namespaces/e2e-tests-watch-4nhxs/configmaps/e2e-watch-test-label-changed,UID:39f0d821-9428-11ea-99e8-0242ac110002,ResourceVersion:10122143,Generation:0,CreationTimestamp:2020-05-12 08:11:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 12 08:11:48.231: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4nhxs,SelfLink:/api/v1/namespaces/e2e-tests-watch-4nhxs/configmaps/e2e-watch-test-label-changed,UID:39f0d821-9428-11ea-99e8-0242ac110002,ResourceVersion:10122144,Generation:0,CreationTimestamp:2020-05-12 08:11:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 12 08:11:58.652: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4nhxs,SelfLink:/api/v1/namespaces/e2e-tests-watch-4nhxs/configmaps/e2e-watch-test-label-changed,UID:39f0d821-9428-11ea-99e8-0242ac110002,ResourceVersion:10122165,Generation:0,CreationTimestamp:2020-05-12 08:11:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 08:11:58.652: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4nhxs,SelfLink:/api/v1/namespaces/e2e-tests-watch-4nhxs/configmaps/e2e-watch-test-label-changed,UID:39f0d821-9428-11ea-99e8-0242ac110002,ResourceVersion:10122166,Generation:0,CreationTimestamp:2020-05-12 08:11:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 12 08:11:58.652: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4nhxs,SelfLink:/api/v1/namespaces/e2e-tests-watch-4nhxs/configmaps/e2e-watch-test-label-changed,UID:39f0d821-9428-11ea-99e8-0242ac110002,ResourceVersion:10122167,Generation:0,CreationTimestamp:2020-05-12 08:11:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:11:58.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-4nhxs" for this suite. 
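A quick way to reproduce the same watch semantics by hand, reusing the label value from the test but with an illustrative configmap name:

# Watch only configmaps that carry the selector label. When the label is changed
# away, the API watch delivers a DELETED notification for the object even though
# the configmap itself still exists; restoring the label delivers ADDED again.
kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch &
kubectl create configmap watch-demo
kubectl label configmap watch-demo watch-this-configmap=label-changed-and-restored
kubectl label configmap watch-demo watch-this-configmap=some-other-value --overwrite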
May 12 08:12:04.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:12:04.760: INFO: namespace: e2e-tests-watch-4nhxs, resource: bindings, ignored listing per whitelist May 12 08:12:04.920: INFO: namespace e2e-tests-watch-4nhxs deletion completed in 6.24520835s • [SLOW TEST:17.666 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:12:04.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:12:11.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-sn4nr" for this suite. 
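Outside the harness, the check this test performs is just a one-shot pod and a log read; pod name and message below are illustrative:

# Run a one-shot busybox command and read its output back from the container log.
kubectl run busybox-logs-demo --image=busybox:1.29 --restart=Never -- sh -c 'echo running in a busybox pod'
kubectl logs busybox-logs-demo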
May 12 08:12:55.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:12:55.260: INFO: namespace: e2e-tests-kubelet-test-sn4nr, resource: bindings, ignored listing per whitelist May 12 08:12:55.295: INFO: namespace e2e-tests-kubelet-test-sn4nr deletion completed in 44.108236946s • [SLOW TEST:50.375 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:12:55.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod May 12 08:12:55.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fq8q4' May 12 08:12:56.391: INFO: stderr: "" May 12 08:12:56.391: INFO: stdout: "pod/pause created\n" May 12 08:12:56.391: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 12 08:12:56.391: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-fq8q4" to be "running and ready" May 12 08:12:56.399: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.027411ms May 12 08:12:58.403: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011983715s May 12 08:13:00.406: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.015086113s May 12 08:13:00.406: INFO: Pod "pause" satisfied condition "running and ready" May 12 08:13:00.406: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod May 12 08:13:00.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-fq8q4' May 12 08:13:00.511: INFO: stderr: "" May 12 08:13:00.511: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 12 08:13:00.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-fq8q4' May 12 08:13:00.615: INFO: stderr: "" May 12 08:13:00.615: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 12 08:13:00.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-fq8q4' May 12 08:13:00.730: INFO: stderr: "" May 12 08:13:00.730: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 12 08:13:00.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-fq8q4' May 12 08:13:00.834: INFO: stderr: "" May 12 08:13:00.834: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources May 12 08:13:00.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fq8q4' May 12 08:13:01.011: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 08:13:01.011: INFO: stdout: "pod \"pause\" force deleted\n" May 12 08:13:01.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-fq8q4' May 12 08:13:01.278: INFO: stderr: "No resources found.\n" May 12 08:13:01.278: INFO: stdout: "" May 12 08:13:01.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-fq8q4 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 08:13:01.376: INFO: stderr: "" May 12 08:13:01.376: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:13:01.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fq8q4" for this suite. 
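Stripped of the --kubeconfig and --namespace plumbing, the label round trip exercised above reduces to three commands (the pod name mirrors the test's pause pod):

# Add a label, show it as a column, then remove it again.
kubectl label pods pause testing-label=testing-label-value
kubectl get pod pause -L testing-label
kubectl label pods pause testing-label-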
May 12 08:13:07.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:13:07.540: INFO: namespace: e2e-tests-kubectl-fq8q4, resource: bindings, ignored listing per whitelist May 12 08:13:07.738: INFO: namespace e2e-tests-kubectl-fq8q4 deletion completed in 6.358443025s • [SLOW TEST:12.443 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:13:07.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-gf8l8/configmap-test-69b13bbe-9428-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 08:13:08.182: INFO: Waiting up to 5m0s for pod "pod-configmaps-69b1a7a8-9428-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-configmap-gf8l8" to be "success or failure" May 12 08:13:08.434: INFO: Pod "pod-configmaps-69b1a7a8-9428-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 252.147205ms May 12 08:13:10.438: INFO: Pod "pod-configmaps-69b1a7a8-9428-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25627423s May 12 08:13:12.576: INFO: Pod "pod-configmaps-69b1a7a8-9428-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.394624473s May 12 08:13:14.634: INFO: Pod "pod-configmaps-69b1a7a8-9428-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.452220809s May 12 08:13:16.660: INFO: Pod "pod-configmaps-69b1a7a8-9428-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.477963493s STEP: Saw pod success May 12 08:13:16.660: INFO: Pod "pod-configmaps-69b1a7a8-9428-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:13:16.663: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-69b1a7a8-9428-11ea-bb6f-0242ac11001c container env-test: STEP: delete the pod May 12 08:13:16.847: INFO: Waiting for pod pod-configmaps-69b1a7a8-9428-11ea-bb6f-0242ac11001c to disappear May 12 08:13:16.897: INFO: Pod pod-configmaps-69b1a7a8-9428-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:13:16.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-gf8l8" for this suite. 
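For reference, consuming a configMap key as an environment variable needs only a valueFrom stanza; the configmap name, key and pod name below are illustrative, not the generated ones from this run:

# Create a configmap, inject one of its keys into a pod's environment, and read
# the result back from the pod log once it has completed.
kubectl create configmap env-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: env-demo
          key: data-1
EOF
kubectl logs configmap-env-demo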
May 12 08:13:22.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:13:23.045: INFO: namespace: e2e-tests-configmap-gf8l8, resource: bindings, ignored listing per whitelist May 12 08:13:23.050: INFO: namespace e2e-tests-configmap-gf8l8 deletion completed in 6.149863839s • [SLOW TEST:15.312 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:13:23.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-45sc STEP: Creating a pod to test atomic-volume-subpath May 12 08:13:23.239: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-45sc" in namespace "e2e-tests-subpath-45k26" to be "success or failure" May 12 08:13:23.255: INFO: Pod "pod-subpath-test-secret-45sc": Phase="Pending", Reason="", readiness=false. Elapsed: 15.727628ms May 12 08:13:25.259: INFO: Pod "pod-subpath-test-secret-45sc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019714317s May 12 08:13:27.403: INFO: Pod "pod-subpath-test-secret-45sc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163079978s May 12 08:13:29.406: INFO: Pod "pod-subpath-test-secret-45sc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.166753706s May 12 08:13:31.410: INFO: Pod "pod-subpath-test-secret-45sc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170695627s May 12 08:13:33.414: INFO: Pod "pod-subpath-test-secret-45sc": Phase="Running", Reason="", readiness=false. Elapsed: 10.174768344s May 12 08:13:35.418: INFO: Pod "pod-subpath-test-secret-45sc": Phase="Running", Reason="", readiness=false. Elapsed: 12.178852998s May 12 08:13:37.528: INFO: Pod "pod-subpath-test-secret-45sc": Phase="Running", Reason="", readiness=false. Elapsed: 14.28899203s May 12 08:13:39.533: INFO: Pod "pod-subpath-test-secret-45sc": Phase="Running", Reason="", readiness=false. Elapsed: 16.293565914s May 12 08:13:41.537: INFO: Pod "pod-subpath-test-secret-45sc": Phase="Running", Reason="", readiness=false. Elapsed: 18.297826156s May 12 08:13:43.542: INFO: Pod "pod-subpath-test-secret-45sc": Phase="Running", Reason="", readiness=false. Elapsed: 20.302115204s May 12 08:13:45.546: INFO: Pod "pod-subpath-test-secret-45sc": Phase="Running", Reason="", readiness=false. Elapsed: 22.306716918s May 12 08:13:47.550: INFO: Pod "pod-subpath-test-secret-45sc": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.310201279s May 12 08:13:49.554: INFO: Pod "pod-subpath-test-secret-45sc": Phase="Running", Reason="", readiness=false. Elapsed: 26.314661506s May 12 08:13:51.558: INFO: Pod "pod-subpath-test-secret-45sc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.31871831s STEP: Saw pod success May 12 08:13:51.558: INFO: Pod "pod-subpath-test-secret-45sc" satisfied condition "success or failure" May 12 08:13:51.562: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-45sc container test-container-subpath-secret-45sc: STEP: delete the pod May 12 08:13:51.594: INFO: Waiting for pod pod-subpath-test-secret-45sc to disappear May 12 08:13:51.621: INFO: Pod pod-subpath-test-secret-45sc no longer exists STEP: Deleting pod pod-subpath-test-secret-45sc May 12 08:13:51.621: INFO: Deleting pod "pod-subpath-test-secret-45sc" in namespace "e2e-tests-subpath-45k26" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:13:51.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-45k26" for this suite. May 12 08:13:57.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:13:57.750: INFO: namespace: e2e-tests-subpath-45k26, resource: bindings, ignored listing per whitelist May 12 08:13:57.765: INFO: namespace e2e-tests-subpath-45k26 deletion completed in 6.13794082s • [SLOW TEST:34.715 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:13:57.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 12 08:13:58.379: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 08:13:58.444: INFO: Waiting for terminating namespaces to be deleted... 
May 12 08:13:58.447: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 12 08:13:58.454: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 12 08:13:58.454: INFO: Container kube-proxy ready: true, restart count 0 May 12 08:13:58.454: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 12 08:13:58.454: INFO: Container kindnet-cni ready: true, restart count 0 May 12 08:13:58.454: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 12 08:13:58.454: INFO: Container coredns ready: true, restart count 0 May 12 08:13:58.454: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 12 08:13:58.459: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 12 08:13:58.459: INFO: Container kindnet-cni ready: true, restart count 0 May 12 08:13:58.459: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 12 08:13:58.459: INFO: Container coredns ready: true, restart count 0 May 12 08:13:58.459: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 12 08:13:58.459: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-8a1df654-9428-11ea-bb6f-0242ac11001c 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-8a1df654-9428-11ea-bb6f-0242ac11001c off the node hunter-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-8a1df654-9428-11ea-bb6f-0242ac11001c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:14:06.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-lqf8z" for this suite. 
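The steps above amount to labelling one node and scheduling against that label. A hand-run equivalent, with an illustrative label key and pod name (the node name is the one from the log):

# Label the node, schedule a pod that selects the label, then clean the label up
# again, as the test does in its final step.
kubectl label node hunter-worker2 example.com/e2e-demo=42
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    example.com/e2e-demo: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl label node hunter-worker2 example.com/e2e-demo-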
May 12 08:14:24.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:14:24.858: INFO: namespace: e2e-tests-sched-pred-lqf8z, resource: bindings, ignored listing per whitelist May 12 08:14:24.883: INFO: namespace e2e-tests-sched-pred-lqf8z deletion completed in 18.106686806s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:27.117 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:14:24.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin May 12 08:14:25.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-689v7 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 12 08:14:30.157: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0512 08:14:30.079099 2161 log.go:172] (0xc0007784d0) (0xc000685c20) Create stream\nI0512 08:14:30.079176 2161 log.go:172] (0xc0007784d0) (0xc000685c20) Stream added, broadcasting: 1\nI0512 08:14:30.082378 2161 log.go:172] (0xc0007784d0) Reply frame received for 1\nI0512 08:14:30.082435 2161 log.go:172] (0xc0007784d0) (0xc0008a2000) Create stream\nI0512 08:14:30.082460 2161 log.go:172] (0xc0007784d0) (0xc0008a2000) Stream added, broadcasting: 3\nI0512 08:14:30.083419 2161 log.go:172] (0xc0007784d0) Reply frame received for 3\nI0512 08:14:30.083462 2161 log.go:172] (0xc0007784d0) (0xc0008a20a0) Create stream\nI0512 08:14:30.083474 2161 log.go:172] (0xc0007784d0) (0xc0008a20a0) Stream added, broadcasting: 5\nI0512 08:14:30.084449 2161 log.go:172] (0xc0007784d0) Reply frame received for 5\nI0512 08:14:30.084496 2161 log.go:172] (0xc0007784d0) (0xc0008a2140) Create stream\nI0512 08:14:30.084514 2161 log.go:172] (0xc0007784d0) (0xc0008a2140) Stream added, broadcasting: 7\nI0512 08:14:30.085740 2161 log.go:172] (0xc0007784d0) Reply frame received for 7\nI0512 08:14:30.085893 2161 log.go:172] (0xc0008a2000) (3) Writing data frame\nI0512 08:14:30.086016 2161 log.go:172] (0xc0008a2000) (3) Writing data frame\nI0512 08:14:30.086816 2161 log.go:172] (0xc0007784d0) Data frame received for 5\nI0512 08:14:30.086834 2161 log.go:172] (0xc0008a20a0) (5) Data frame handling\nI0512 08:14:30.086852 2161 log.go:172] (0xc0008a20a0) (5) Data frame sent\nI0512 08:14:30.087406 2161 log.go:172] (0xc0007784d0) Data frame received for 5\nI0512 08:14:30.087424 2161 log.go:172] (0xc0008a20a0) (5) Data frame handling\nI0512 08:14:30.087437 2161 log.go:172] (0xc0008a20a0) (5) Data frame sent\nI0512 08:14:30.128779 2161 log.go:172] (0xc0007784d0) Data frame received for 5\nI0512 08:14:30.128838 2161 log.go:172] (0xc0008a20a0) (5) Data frame handling\nI0512 08:14:30.128875 2161 log.go:172] (0xc0007784d0) Data frame received for 7\nI0512 08:14:30.128909 2161 log.go:172] (0xc0008a2140) (7) Data frame handling\nI0512 08:14:30.130029 2161 log.go:172] (0xc0007784d0) Data frame received for 1\nI0512 08:14:30.130074 2161 log.go:172] (0xc0007784d0) (0xc0008a2000) Stream removed, broadcasting: 3\nI0512 08:14:30.130106 2161 log.go:172] (0xc000685c20) (1) Data frame handling\nI0512 08:14:30.130120 2161 log.go:172] (0xc000685c20) (1) Data frame sent\nI0512 08:14:30.130132 2161 log.go:172] (0xc0007784d0) (0xc000685c20) Stream removed, broadcasting: 1\nI0512 08:14:30.130147 2161 log.go:172] (0xc0007784d0) Go away received\nI0512 08:14:30.130440 2161 log.go:172] (0xc0007784d0) (0xc000685c20) Stream removed, broadcasting: 1\nI0512 08:14:30.130550 2161 log.go:172] (0xc0007784d0) (0xc0008a2000) Stream removed, broadcasting: 3\nI0512 08:14:30.130585 2161 log.go:172] (0xc0007784d0) (0xc0008a20a0) Stream removed, broadcasting: 5\nI0512 08:14:30.130618 2161 log.go:172] (0xc0007784d0) (0xc0008a2140) Stream removed, broadcasting: 7\n" May 12 08:14:30.157: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:14:32.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-689v7" for this suite. 
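The deprecation warning in the stderr above points at the non-generator route. On newer kubectl releases (kubectl create job is not available on the v1.13 client used in this run) the same one-shot job can be driven roughly like this, with an illustrative job name:

# Create the job, wait for it to finish, read its output, then delete it —
# the manual equivalent of run --rm with --attach.
kubectl create job rm-demo --image=docker.io/library/busybox:1.29 -- sh -c 'echo stdin closed'
kubectl wait --for=condition=complete job/rm-demo
kubectl logs job/rm-demo
kubectl delete job rm-demo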
May 12 08:14:38.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:14:38.250: INFO: namespace: e2e-tests-kubectl-689v7, resource: bindings, ignored listing per whitelist May 12 08:14:38.270: INFO: namespace e2e-tests-kubectl-689v7 deletion completed in 6.102105195s • [SLOW TEST:13.387 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:14:38.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-9f8f1eb1-9428-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 08:14:38.490: INFO: Waiting up to 5m0s for pod "pod-configmaps-9f8fa86b-9428-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-configmap-xtx6l" to be "success or failure" May 12 08:14:38.494: INFO: Pod "pod-configmaps-9f8fa86b-9428-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.788608ms May 12 08:14:40.499: INFO: Pod "pod-configmaps-9f8fa86b-9428-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009121966s May 12 08:14:42.503: INFO: Pod "pod-configmaps-9f8fa86b-9428-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013452639s STEP: Saw pod success May 12 08:14:42.503: INFO: Pod "pod-configmaps-9f8fa86b-9428-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:14:42.507: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-9f8fa86b-9428-11ea-bb6f-0242ac11001c container configmap-volume-test: STEP: delete the pod May 12 08:14:42.747: INFO: Waiting for pod pod-configmaps-9f8fa86b-9428-11ea-bb6f-0242ac11001c to disappear May 12 08:14:42.757: INFO: Pod pod-configmaps-9f8fa86b-9428-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:14:42.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-xtx6l" for this suite. 
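Mounting one configMap into two volumes of the same pod only repeats the volume/volumeMount pair; a minimal sketch with illustrative names:

# The same configMap backs two volumes, mounted at two paths in one container.
kubectl create configmap multi-volume-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-multi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: cm-volume-1
      mountPath: /etc/configmap-volume-1
    - name: cm-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: cm-volume-1
    configMap:
      name: multi-volume-demo
  - name: cm-volume-2
    configMap:
      name: multi-volume-demo
EOF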
May 12 08:14:48.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:14:48.809: INFO: namespace: e2e-tests-configmap-xtx6l, resource: bindings, ignored listing per whitelist May 12 08:14:48.861: INFO: namespace e2e-tests-configmap-xtx6l deletion completed in 6.100704904s • [SLOW TEST:10.592 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:14:48.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 08:14:49.065: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 May 12 08:14:49.071: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-qrdlp/daemonsets","resourceVersion":"10122723"},"items":null} May 12 08:14:49.073: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-qrdlp/pods","resourceVersion":"10122723"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:14:49.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-qrdlp" for this suite. 
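The DaemonSet rollback spec above is skipped because the framework sees fewer than two schedulable nodes (the "-1" shows it could not even count them). A hedged sketch of what one might run by hand in that situation: first check the node count, then exercise the rollback operation the skipped spec targets. All names and the DaemonSet itself are illustrative, not taken from this log:

  # How many nodes does the cluster actually expose?
  kubectl get nodes
  kubectl get nodes --no-headers | wc -l

  # The operation the skipped spec exercises: update a DaemonSet image,
  # then roll back to the previous controller revision.
  kubectl -n demo set image daemonset/demo-ds main=docker.io/library/nginx:1.15-alpine
  kubectl -n demo rollout status daemonset/demo-ds
  kubectl -n demo rollout undo daemonset/demo-ds
  kubectl -n demo rollout history daemonset/demo-ds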
May 12 08:14:55.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:14:55.230: INFO: namespace: e2e-tests-daemonsets-qrdlp, resource: bindings, ignored listing per whitelist May 12 08:14:55.238: INFO: namespace e2e-tests-daemonsets-qrdlp deletion completed in 6.15743114s S [SKIPPING] [6.377 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 08:14:49.065: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:14:55.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 08:14:55.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-5x898' May 12 08:14:55.438: INFO: stderr: "" May 12 08:14:55.438: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 May 12 08:14:55.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-5x898' May 12 08:15:01.147: INFO: stderr: "" May 12 08:15:01.147: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:15:01.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-5x898" for this suite. 
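The run-pod spec above boils down to two kubectl invocations that work the same way outside the suite. With --restart=Never (and, on the v1.13 kubectl used here, --generator=run-pod/v1) kubectl creates a bare Pod rather than a Deployment or Job; on newer kubectl the generator flag is gone and --restart=Never alone has the same effect. The namespace below is an assumption:

  kubectl run e2e-test-nginx-pod --restart=Never \
      --image=docker.io/library/nginx:1.14-alpine --namespace=demo
  kubectl get pod e2e-test-nginx-pod --namespace=demo    # verify a Pod, not a controller, was created
  kubectl delete pod e2e-test-nginx-pod --namespace=demo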
May 12 08:15:07.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:15:07.170: INFO: namespace: e2e-tests-kubectl-5x898, resource: bindings, ignored listing per whitelist May 12 08:15:07.231: INFO: namespace e2e-tests-kubectl-5x898 deletion completed in 6.079760309s • [SLOW TEST:11.992 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:15:07.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 12 08:15:07.552: INFO: Waiting up to 5m0s for pod "pod-b0d80e02-9428-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-emptydir-qdlrz" to be "success or failure" May 12 08:15:07.609: INFO: Pod "pod-b0d80e02-9428-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 56.563633ms May 12 08:15:09.612: INFO: Pod "pod-b0d80e02-9428-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059836254s May 12 08:15:11.616: INFO: Pod "pod-b0d80e02-9428-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063651476s STEP: Saw pod success May 12 08:15:11.616: INFO: Pod "pod-b0d80e02-9428-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:15:11.619: INFO: Trying to get logs from node hunter-worker2 pod pod-b0d80e02-9428-11ea-bb6f-0242ac11001c container test-container: STEP: delete the pod May 12 08:15:11.677: INFO: Waiting for pod pod-b0d80e02-9428-11ea-bb6f-0242ac11001c to disappear May 12 08:15:11.692: INFO: Pod pod-b0d80e02-9428-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:15:11.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qdlrz" for this suite. 
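The emptydir spec above checks that a root-owned file written into an emptyDir volume on the default medium can carry 0777 permissions. The suite uses its own mount-test image for this; the pod below is a rough, hedged stand-in using busybox, with all names assumed:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-check
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "touch /mnt/test/file && chmod 0777 /mnt/test/file && stat -c '%a' /mnt/test/file"]
      volumeMounts:
      - name: test-volume
        mountPath: /mnt/test
    volumes:
    - name: test-volume
      emptyDir: {}            # default medium (node disk); medium: Memory would use tmpfs instead
  EOF
  kubectl logs emptydir-mode-check    # expect 777 once the pod reaches Succeeded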
May 12 08:15:17.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:15:17.786: INFO: namespace: e2e-tests-emptydir-qdlrz, resource: bindings, ignored listing per whitelist May 12 08:15:17.810: INFO: namespace e2e-tests-emptydir-qdlrz deletion completed in 6.11477338s • [SLOW TEST:10.579 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:15:17.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-84wh2 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet May 12 08:15:18.154: INFO: Found 0 stateful pods, waiting for 3 May 12 08:15:28.158: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 08:15:28.158: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 08:15:28.158: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 12 08:15:38.159: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 08:15:38.159: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 08:15:38.159: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 12 08:15:38.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-84wh2 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 08:15:38.762: INFO: stderr: "I0512 08:15:38.398025 2227 log.go:172] (0xc000138840) (0xc0005d34a0) Create stream\nI0512 08:15:38.398091 2227 log.go:172] (0xc000138840) (0xc0005d34a0) Stream added, broadcasting: 1\nI0512 08:15:38.400203 2227 log.go:172] (0xc000138840) Reply frame received for 1\nI0512 08:15:38.400237 2227 log.go:172] (0xc000138840) (0xc000756000) Create stream\nI0512 08:15:38.400249 2227 log.go:172] (0xc000138840) (0xc000756000) Stream added, broadcasting: 3\nI0512 08:15:38.401304 2227 log.go:172] (0xc000138840) Reply frame received for 3\nI0512 08:15:38.401371 2227 log.go:172] (0xc000138840) (0xc0005d3540) Create stream\nI0512 08:15:38.401393 2227 
log.go:172] (0xc000138840) (0xc0005d3540) Stream added, broadcasting: 5\nI0512 08:15:38.402459 2227 log.go:172] (0xc000138840) Reply frame received for 5\nI0512 08:15:38.757345 2227 log.go:172] (0xc000138840) Data frame received for 3\nI0512 08:15:38.757368 2227 log.go:172] (0xc000756000) (3) Data frame handling\nI0512 08:15:38.757375 2227 log.go:172] (0xc000756000) (3) Data frame sent\nI0512 08:15:38.757531 2227 log.go:172] (0xc000138840) Data frame received for 5\nI0512 08:15:38.757557 2227 log.go:172] (0xc0005d3540) (5) Data frame handling\nI0512 08:15:38.757882 2227 log.go:172] (0xc000138840) Data frame received for 3\nI0512 08:15:38.757897 2227 log.go:172] (0xc000756000) (3) Data frame handling\nI0512 08:15:38.759433 2227 log.go:172] (0xc000138840) Data frame received for 1\nI0512 08:15:38.759454 2227 log.go:172] (0xc0005d34a0) (1) Data frame handling\nI0512 08:15:38.759467 2227 log.go:172] (0xc0005d34a0) (1) Data frame sent\nI0512 08:15:38.759485 2227 log.go:172] (0xc000138840) (0xc0005d34a0) Stream removed, broadcasting: 1\nI0512 08:15:38.759496 2227 log.go:172] (0xc000138840) Go away received\nI0512 08:15:38.759673 2227 log.go:172] (0xc000138840) (0xc0005d34a0) Stream removed, broadcasting: 1\nI0512 08:15:38.759699 2227 log.go:172] (0xc000138840) (0xc000756000) Stream removed, broadcasting: 3\nI0512 08:15:38.759708 2227 log.go:172] (0xc000138840) (0xc0005d3540) Stream removed, broadcasting: 5\n" May 12 08:15:38.762: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 08:15:38.762: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 12 08:15:48.894: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 12 08:15:59.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-84wh2 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 08:15:59.434: INFO: stderr: "I0512 08:15:59.345847 2250 log.go:172] (0xc000138630) (0xc000976320) Create stream\nI0512 08:15:59.345896 2250 log.go:172] (0xc000138630) (0xc000976320) Stream added, broadcasting: 1\nI0512 08:15:59.347640 2250 log.go:172] (0xc000138630) Reply frame received for 1\nI0512 08:15:59.347679 2250 log.go:172] (0xc000138630) (0xc0003ef180) Create stream\nI0512 08:15:59.347705 2250 log.go:172] (0xc000138630) (0xc0003ef180) Stream added, broadcasting: 3\nI0512 08:15:59.349387 2250 log.go:172] (0xc000138630) Reply frame received for 3\nI0512 08:15:59.349848 2250 log.go:172] (0xc000138630) (0xc000692000) Create stream\nI0512 08:15:59.349883 2250 log.go:172] (0xc000138630) (0xc000692000) Stream added, broadcasting: 5\nI0512 08:15:59.350726 2250 log.go:172] (0xc000138630) Reply frame received for 5\nI0512 08:15:59.428325 2250 log.go:172] (0xc000138630) Data frame received for 5\nI0512 08:15:59.428362 2250 log.go:172] (0xc000692000) (5) Data frame handling\nI0512 08:15:59.428391 2250 log.go:172] (0xc000138630) Data frame received for 3\nI0512 08:15:59.428406 2250 log.go:172] (0xc0003ef180) (3) Data frame handling\nI0512 08:15:59.428418 2250 log.go:172] (0xc0003ef180) (3) Data frame sent\nI0512 08:15:59.428433 2250 log.go:172] (0xc000138630) Data frame received for 3\nI0512 08:15:59.428445 2250 log.go:172] (0xc0003ef180) (3) Data frame handling\nI0512 
08:15:59.430003 2250 log.go:172] (0xc000138630) Data frame received for 1\nI0512 08:15:59.430068 2250 log.go:172] (0xc000976320) (1) Data frame handling\nI0512 08:15:59.430090 2250 log.go:172] (0xc000976320) (1) Data frame sent\nI0512 08:15:59.430102 2250 log.go:172] (0xc000138630) (0xc000976320) Stream removed, broadcasting: 1\nI0512 08:15:59.430117 2250 log.go:172] (0xc000138630) Go away received\nI0512 08:15:59.430416 2250 log.go:172] (0xc000138630) (0xc000976320) Stream removed, broadcasting: 1\nI0512 08:15:59.430442 2250 log.go:172] (0xc000138630) (0xc0003ef180) Stream removed, broadcasting: 3\nI0512 08:15:59.430452 2250 log.go:172] (0xc000138630) (0xc000692000) Stream removed, broadcasting: 5\n" May 12 08:15:59.434: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 08:15:59.434: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 08:16:10.012: INFO: Waiting for StatefulSet e2e-tests-statefulset-84wh2/ss2 to complete update May 12 08:16:10.012: INFO: Waiting for Pod e2e-tests-statefulset-84wh2/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 08:16:10.012: INFO: Waiting for Pod e2e-tests-statefulset-84wh2/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 08:16:20.020: INFO: Waiting for StatefulSet e2e-tests-statefulset-84wh2/ss2 to complete update May 12 08:16:20.020: INFO: Waiting for Pod e2e-tests-statefulset-84wh2/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 12 08:16:30.018: INFO: Waiting for StatefulSet e2e-tests-statefulset-84wh2/ss2 to complete update May 12 08:16:30.018: INFO: Waiting for Pod e2e-tests-statefulset-84wh2/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision May 12 08:16:40.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-84wh2 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 08:16:40.316: INFO: stderr: "I0512 08:16:40.142679 2273 log.go:172] (0xc000138630) (0xc0007b3360) Create stream\nI0512 08:16:40.142744 2273 log.go:172] (0xc000138630) (0xc0007b3360) Stream added, broadcasting: 1\nI0512 08:16:40.144730 2273 log.go:172] (0xc000138630) Reply frame received for 1\nI0512 08:16:40.144770 2273 log.go:172] (0xc000138630) (0xc00048e000) Create stream\nI0512 08:16:40.144780 2273 log.go:172] (0xc000138630) (0xc00048e000) Stream added, broadcasting: 3\nI0512 08:16:40.145860 2273 log.go:172] (0xc000138630) Reply frame received for 3\nI0512 08:16:40.145907 2273 log.go:172] (0xc000138630) (0xc000676000) Create stream\nI0512 08:16:40.145918 2273 log.go:172] (0xc000138630) (0xc000676000) Stream added, broadcasting: 5\nI0512 08:16:40.146846 2273 log.go:172] (0xc000138630) Reply frame received for 5\nI0512 08:16:40.310296 2273 log.go:172] (0xc000138630) Data frame received for 3\nI0512 08:16:40.310325 2273 log.go:172] (0xc00048e000) (3) Data frame handling\nI0512 08:16:40.310338 2273 log.go:172] (0xc00048e000) (3) Data frame sent\nI0512 08:16:40.310708 2273 log.go:172] (0xc000138630) Data frame received for 3\nI0512 08:16:40.310729 2273 log.go:172] (0xc000138630) Data frame received for 5\nI0512 08:16:40.310749 2273 log.go:172] (0xc000676000) (5) Data frame handling\nI0512 08:16:40.310780 2273 log.go:172] (0xc00048e000) (3) Data frame handling\nI0512 08:16:40.313106 2273 log.go:172] (0xc000138630) Data frame received for 
1\nI0512 08:16:40.313312 2273 log.go:172] (0xc0007b3360) (1) Data frame handling\nI0512 08:16:40.313342 2273 log.go:172] (0xc0007b3360) (1) Data frame sent\nI0512 08:16:40.313353 2273 log.go:172] (0xc000138630) (0xc0007b3360) Stream removed, broadcasting: 1\nI0512 08:16:40.313363 2273 log.go:172] (0xc000138630) Go away received\nI0512 08:16:40.313583 2273 log.go:172] (0xc000138630) (0xc0007b3360) Stream removed, broadcasting: 1\nI0512 08:16:40.313607 2273 log.go:172] (0xc000138630) (0xc00048e000) Stream removed, broadcasting: 3\nI0512 08:16:40.313621 2273 log.go:172] (0xc000138630) (0xc000676000) Stream removed, broadcasting: 5\n" May 12 08:16:40.316: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 08:16:40.316: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 08:16:50.343: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 12 08:17:00.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-84wh2 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 08:17:00.688: INFO: stderr: "I0512 08:17:00.628347 2295 log.go:172] (0xc0006cc0b0) (0xc0007a75e0) Create stream\nI0512 08:17:00.628397 2295 log.go:172] (0xc0006cc0b0) (0xc0007a75e0) Stream added, broadcasting: 1\nI0512 08:17:00.630731 2295 log.go:172] (0xc0006cc0b0) Reply frame received for 1\nI0512 08:17:00.630764 2295 log.go:172] (0xc0006cc0b0) (0xc0001fe000) Create stream\nI0512 08:17:00.630774 2295 log.go:172] (0xc0006cc0b0) (0xc0001fe000) Stream added, broadcasting: 3\nI0512 08:17:00.631413 2295 log.go:172] (0xc0006cc0b0) Reply frame received for 3\nI0512 08:17:00.631436 2295 log.go:172] (0xc0006cc0b0) (0xc0007a7680) Create stream\nI0512 08:17:00.631447 2295 log.go:172] (0xc0006cc0b0) (0xc0007a7680) Stream added, broadcasting: 5\nI0512 08:17:00.632112 2295 log.go:172] (0xc0006cc0b0) Reply frame received for 5\nI0512 08:17:00.683213 2295 log.go:172] (0xc0006cc0b0) Data frame received for 5\nI0512 08:17:00.683238 2295 log.go:172] (0xc0007a7680) (5) Data frame handling\nI0512 08:17:00.683266 2295 log.go:172] (0xc0006cc0b0) Data frame received for 3\nI0512 08:17:00.683273 2295 log.go:172] (0xc0001fe000) (3) Data frame handling\nI0512 08:17:00.683289 2295 log.go:172] (0xc0001fe000) (3) Data frame sent\nI0512 08:17:00.683298 2295 log.go:172] (0xc0006cc0b0) Data frame received for 3\nI0512 08:17:00.683304 2295 log.go:172] (0xc0001fe000) (3) Data frame handling\nI0512 08:17:00.684427 2295 log.go:172] (0xc0006cc0b0) Data frame received for 1\nI0512 08:17:00.684444 2295 log.go:172] (0xc0007a75e0) (1) Data frame handling\nI0512 08:17:00.684452 2295 log.go:172] (0xc0007a75e0) (1) Data frame sent\nI0512 08:17:00.684463 2295 log.go:172] (0xc0006cc0b0) (0xc0007a75e0) Stream removed, broadcasting: 1\nI0512 08:17:00.684592 2295 log.go:172] (0xc0006cc0b0) (0xc0007a75e0) Stream removed, broadcasting: 1\nI0512 08:17:00.684605 2295 log.go:172] (0xc0006cc0b0) (0xc0001fe000) Stream removed, broadcasting: 3\nI0512 08:17:00.684611 2295 log.go:172] (0xc0006cc0b0) (0xc0007a7680) Stream removed, broadcasting: 5\n" May 12 08:17:00.688: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 08:17:00.688: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 08:17:10.703: INFO: Waiting for StatefulSet 
e2e-tests-statefulset-84wh2/ss2 to complete update May 12 08:17:10.703: INFO: Waiting for Pod e2e-tests-statefulset-84wh2/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 12 08:17:10.703: INFO: Waiting for Pod e2e-tests-statefulset-84wh2/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 12 08:17:21.010: INFO: Waiting for StatefulSet e2e-tests-statefulset-84wh2/ss2 to complete update May 12 08:17:21.010: INFO: Waiting for Pod e2e-tests-statefulset-84wh2/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 12 08:17:21.010: INFO: Waiting for Pod e2e-tests-statefulset-84wh2/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 12 08:17:30.733: INFO: Waiting for StatefulSet e2e-tests-statefulset-84wh2/ss2 to complete update May 12 08:17:30.733: INFO: Waiting for Pod e2e-tests-statefulset-84wh2/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 12 08:17:40.713: INFO: Waiting for StatefulSet e2e-tests-statefulset-84wh2/ss2 to complete update May 12 08:17:40.713: INFO: Waiting for Pod e2e-tests-statefulset-84wh2/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 12 08:17:50.709: INFO: Deleting all statefulset in ns e2e-tests-statefulset-84wh2 May 12 08:17:50.711: INFO: Scaling statefulset ss2 to 0 May 12 08:18:20.768: INFO: Waiting for statefulset status.replicas updated to 0 May 12 08:18:20.770: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:18:20.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-84wh2" for this suite. 
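The long exec transcripts above are the test toggling pod readiness: it moves nginx's index.html out of the web root (so the readiness probe fails) and back again (so it recovers), which lets it watch the RollingUpdate strategy replace pods in reverse ordinal order, first toward the new revision and then back to the old one. The same update-and-rollback cycle can be driven manually; the namespace, StatefulSet and container names below are illustrative, and kubectl rollout undo is a stock alternative to the template patch the suite applies:

  kubectl -n demo set image statefulset/web nginx=docker.io/library/nginx:1.15-alpine
  kubectl -n demo rollout status statefulset/web      # pods are recreated from the highest ordinal down
  kubectl -n demo rollout history statefulset/web     # shows the old and new controller revisions
  kubectl -n demo rollout undo statefulset/web        # roll back to the previous revision
  kubectl -n demo rollout status statefulset/web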
May 12 08:18:36.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:18:36.878: INFO: namespace: e2e-tests-statefulset-84wh2, resource: bindings, ignored listing per whitelist May 12 08:18:37.124: INFO: namespace e2e-tests-statefulset-84wh2 deletion completed in 16.318162409s • [SLOW TEST:199.314 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:18:37.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 08:18:37.653: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e1da704-9429-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-gx5fn" to be "success or failure" May 12 08:18:37.675: INFO: Pod "downwardapi-volume-2e1da704-9429-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 21.637656ms May 12 08:18:39.803: INFO: Pod "downwardapi-volume-2e1da704-9429-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149462922s May 12 08:18:41.807: INFO: Pod "downwardapi-volume-2e1da704-9429-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153676583s May 12 08:18:43.811: INFO: Pod "downwardapi-volume-2e1da704-9429-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.157957574s May 12 08:18:45.815: INFO: Pod "downwardapi-volume-2e1da704-9429-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.162158443s STEP: Saw pod success May 12 08:18:45.816: INFO: Pod "downwardapi-volume-2e1da704-9429-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:18:45.819: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-2e1da704-9429-11ea-bb6f-0242ac11001c container client-container: STEP: delete the pod May 12 08:18:46.277: INFO: Waiting for pod downwardapi-volume-2e1da704-9429-11ea-bb6f-0242ac11001c to disappear May 12 08:18:46.287: INFO: Pod downwardapi-volume-2e1da704-9429-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:18:46.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gx5fn" for this suite. May 12 08:18:52.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:18:52.414: INFO: namespace: e2e-tests-projected-gx5fn, resource: bindings, ignored listing per whitelist May 12 08:18:52.480: INFO: namespace e2e-tests-projected-gx5fn deletion completed in 6.18951196s • [SLOW TEST:15.355 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:18:52.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-371fe7b4-9429-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 08:18:52.837: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3728bdc1-9429-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-fhk89" to be "success or failure" May 12 08:18:52.868: INFO: Pod "pod-projected-configmaps-3728bdc1-9429-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.375712ms May 12 08:18:54.872: INFO: Pod "pod-projected-configmaps-3728bdc1-9429-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034159985s May 12 08:18:56.875: INFO: Pod "pod-projected-configmaps-3728bdc1-9429-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03755847s STEP: Saw pod success May 12 08:18:56.875: INFO: Pod "pod-projected-configmaps-3728bdc1-9429-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:18:56.877: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-3728bdc1-9429-11ea-bb6f-0242ac11001c container projected-configmap-volume-test: STEP: delete the pod May 12 08:18:57.219: INFO: Waiting for pod pod-projected-configmaps-3728bdc1-9429-11ea-bb6f-0242ac11001c to disappear May 12 08:18:57.406: INFO: Pod pod-projected-configmaps-3728bdc1-9429-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:18:57.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fhk89" for this suite. May 12 08:19:09.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:19:10.729: INFO: namespace: e2e-tests-projected-fhk89, resource: bindings, ignored listing per whitelist May 12 08:19:10.759: INFO: namespace e2e-tests-projected-fhk89 deletion completed in 13.350159442s • [SLOW TEST:18.279 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:19:10.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-hv6wt [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-hv6wt STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-hv6wt May 12 08:19:12.395: INFO: Found 0 stateful pods, waiting for 1 May 12 08:19:22.640: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 12 08:19:22.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hv6wt ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 08:19:23.227: INFO: stderr: "I0512 08:19:22.757496 2317 log.go:172] 
(0xc000154840) (0xc0006752c0) Create stream\nI0512 08:19:22.757542 2317 log.go:172] (0xc000154840) (0xc0006752c0) Stream added, broadcasting: 1\nI0512 08:19:22.759308 2317 log.go:172] (0xc000154840) Reply frame received for 1\nI0512 08:19:22.759333 2317 log.go:172] (0xc000154840) (0xc00070e000) Create stream\nI0512 08:19:22.759345 2317 log.go:172] (0xc000154840) (0xc00070e000) Stream added, broadcasting: 3\nI0512 08:19:22.760019 2317 log.go:172] (0xc000154840) Reply frame received for 3\nI0512 08:19:22.760045 2317 log.go:172] (0xc000154840) (0xc00070e140) Create stream\nI0512 08:19:22.760053 2317 log.go:172] (0xc000154840) (0xc00070e140) Stream added, broadcasting: 5\nI0512 08:19:22.760760 2317 log.go:172] (0xc000154840) Reply frame received for 5\nI0512 08:19:23.222313 2317 log.go:172] (0xc000154840) Data frame received for 3\nI0512 08:19:23.222350 2317 log.go:172] (0xc00070e000) (3) Data frame handling\nI0512 08:19:23.222369 2317 log.go:172] (0xc00070e000) (3) Data frame sent\nI0512 08:19:23.222427 2317 log.go:172] (0xc000154840) Data frame received for 3\nI0512 08:19:23.222437 2317 log.go:172] (0xc00070e000) (3) Data frame handling\nI0512 08:19:23.222647 2317 log.go:172] (0xc000154840) Data frame received for 5\nI0512 08:19:23.222663 2317 log.go:172] (0xc00070e140) (5) Data frame handling\nI0512 08:19:23.224069 2317 log.go:172] (0xc000154840) Data frame received for 1\nI0512 08:19:23.224086 2317 log.go:172] (0xc0006752c0) (1) Data frame handling\nI0512 08:19:23.224102 2317 log.go:172] (0xc0006752c0) (1) Data frame sent\nI0512 08:19:23.224220 2317 log.go:172] (0xc000154840) (0xc0006752c0) Stream removed, broadcasting: 1\nI0512 08:19:23.224239 2317 log.go:172] (0xc000154840) Go away received\nI0512 08:19:23.224463 2317 log.go:172] (0xc000154840) (0xc0006752c0) Stream removed, broadcasting: 1\nI0512 08:19:23.224484 2317 log.go:172] (0xc000154840) (0xc00070e000) Stream removed, broadcasting: 3\nI0512 08:19:23.224499 2317 log.go:172] (0xc000154840) (0xc00070e140) Stream removed, broadcasting: 5\n" May 12 08:19:23.227: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 08:19:23.227: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 08:19:23.402: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 12 08:19:33.509: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 08:19:33.509: INFO: Waiting for statefulset status.replicas updated to 0 May 12 08:19:33.587: INFO: POD NODE PHASE GRACE CONDITIONS May 12 08:19:33.587: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:13 +0000 UTC }] May 12 08:19:33.587: INFO: May 12 08:19:33.587: INFO: StatefulSet ss has not reached scale 3, at 1 May 12 08:19:34.592: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.930049676s May 12 08:19:35.643: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.9250676s May 12 08:19:36.893: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.874205898s May 12 08:19:38.066: INFO: 
Verifying statefulset ss doesn't scale past 3 for another 5.623827025s May 12 08:19:39.187: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.450618922s May 12 08:19:40.236: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.330024349s May 12 08:19:41.325: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.281380845s May 12 08:19:42.654: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.19166462s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-hv6wt May 12 08:19:43.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hv6wt ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 08:19:43.906: INFO: stderr: "I0512 08:19:43.824306 2339 log.go:172] (0xc00014c840) (0xc000758640) Create stream\nI0512 08:19:43.824374 2339 log.go:172] (0xc00014c840) (0xc000758640) Stream added, broadcasting: 1\nI0512 08:19:43.827636 2339 log.go:172] (0xc00014c840) Reply frame received for 1\nI0512 08:19:43.827670 2339 log.go:172] (0xc00014c840) (0xc000684dc0) Create stream\nI0512 08:19:43.827677 2339 log.go:172] (0xc00014c840) (0xc000684dc0) Stream added, broadcasting: 3\nI0512 08:19:43.828377 2339 log.go:172] (0xc00014c840) Reply frame received for 3\nI0512 08:19:43.828411 2339 log.go:172] (0xc00014c840) (0xc00079e000) Create stream\nI0512 08:19:43.828424 2339 log.go:172] (0xc00014c840) (0xc00079e000) Stream added, broadcasting: 5\nI0512 08:19:43.829825 2339 log.go:172] (0xc00014c840) Reply frame received for 5\nI0512 08:19:43.900358 2339 log.go:172] (0xc00014c840) Data frame received for 5\nI0512 08:19:43.900464 2339 log.go:172] (0xc00079e000) (5) Data frame handling\nI0512 08:19:43.900498 2339 log.go:172] (0xc00014c840) Data frame received for 3\nI0512 08:19:43.900515 2339 log.go:172] (0xc000684dc0) (3) Data frame handling\nI0512 08:19:43.900528 2339 log.go:172] (0xc000684dc0) (3) Data frame sent\nI0512 08:19:43.900538 2339 log.go:172] (0xc00014c840) Data frame received for 3\nI0512 08:19:43.900566 2339 log.go:172] (0xc000684dc0) (3) Data frame handling\nI0512 08:19:43.901701 2339 log.go:172] (0xc00014c840) Data frame received for 1\nI0512 08:19:43.901719 2339 log.go:172] (0xc000758640) (1) Data frame handling\nI0512 08:19:43.901735 2339 log.go:172] (0xc000758640) (1) Data frame sent\nI0512 08:19:43.901747 2339 log.go:172] (0xc00014c840) (0xc000758640) Stream removed, broadcasting: 1\nI0512 08:19:43.901758 2339 log.go:172] (0xc00014c840) Go away received\nI0512 08:19:43.902017 2339 log.go:172] (0xc00014c840) (0xc000758640) Stream removed, broadcasting: 1\nI0512 08:19:43.902037 2339 log.go:172] (0xc00014c840) (0xc000684dc0) Stream removed, broadcasting: 3\nI0512 08:19:43.902048 2339 log.go:172] (0xc00014c840) (0xc00079e000) Stream removed, broadcasting: 5\n" May 12 08:19:43.906: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 08:19:43.906: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 08:19:43.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hv6wt ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 08:19:44.111: INFO: stderr: "I0512 08:19:44.030968 2361 log.go:172] (0xc0007d0bb0) (0xc0007b9680) Create stream\nI0512 08:19:44.031042 2361 log.go:172] (0xc0007d0bb0) 
(0xc0007b9680) Stream added, broadcasting: 1\nI0512 08:19:44.041989 2361 log.go:172] (0xc0007d0bb0) Reply frame received for 1\nI0512 08:19:44.042060 2361 log.go:172] (0xc0007d0bb0) (0xc0007b8b40) Create stream\nI0512 08:19:44.042072 2361 log.go:172] (0xc0007d0bb0) (0xc0007b8b40) Stream added, broadcasting: 3\nI0512 08:19:44.044975 2361 log.go:172] (0xc0007d0bb0) Reply frame received for 3\nI0512 08:19:44.045008 2361 log.go:172] (0xc0007d0bb0) (0xc0007b8c80) Create stream\nI0512 08:19:44.045019 2361 log.go:172] (0xc0007d0bb0) (0xc0007b8c80) Stream added, broadcasting: 5\nI0512 08:19:44.046606 2361 log.go:172] (0xc0007d0bb0) Reply frame received for 5\nI0512 08:19:44.105547 2361 log.go:172] (0xc0007d0bb0) Data frame received for 3\nI0512 08:19:44.105578 2361 log.go:172] (0xc0007b8b40) (3) Data frame handling\nI0512 08:19:44.105587 2361 log.go:172] (0xc0007b8b40) (3) Data frame sent\nI0512 08:19:44.105593 2361 log.go:172] (0xc0007d0bb0) Data frame received for 3\nI0512 08:19:44.105600 2361 log.go:172] (0xc0007b8b40) (3) Data frame handling\nI0512 08:19:44.105630 2361 log.go:172] (0xc0007d0bb0) Data frame received for 5\nI0512 08:19:44.105637 2361 log.go:172] (0xc0007b8c80) (5) Data frame handling\nI0512 08:19:44.105644 2361 log.go:172] (0xc0007b8c80) (5) Data frame sent\nI0512 08:19:44.105654 2361 log.go:172] (0xc0007d0bb0) Data frame received for 5\nI0512 08:19:44.105668 2361 log.go:172] (0xc0007b8c80) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0512 08:19:44.106652 2361 log.go:172] (0xc0007d0bb0) Data frame received for 1\nI0512 08:19:44.106673 2361 log.go:172] (0xc0007b9680) (1) Data frame handling\nI0512 08:19:44.106684 2361 log.go:172] (0xc0007b9680) (1) Data frame sent\nI0512 08:19:44.106697 2361 log.go:172] (0xc0007d0bb0) (0xc0007b9680) Stream removed, broadcasting: 1\nI0512 08:19:44.106724 2361 log.go:172] (0xc0007d0bb0) Go away received\nI0512 08:19:44.106952 2361 log.go:172] (0xc0007d0bb0) (0xc0007b9680) Stream removed, broadcasting: 1\nI0512 08:19:44.106965 2361 log.go:172] (0xc0007d0bb0) (0xc0007b8b40) Stream removed, broadcasting: 3\nI0512 08:19:44.106972 2361 log.go:172] (0xc0007d0bb0) (0xc0007b8c80) Stream removed, broadcasting: 5\n" May 12 08:19:44.111: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 08:19:44.111: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 08:19:44.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hv6wt ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 08:19:44.310: INFO: stderr: "I0512 08:19:44.227347 2382 log.go:172] (0xc0007d82c0) (0xc000708780) Create stream\nI0512 08:19:44.227399 2382 log.go:172] (0xc0007d82c0) (0xc000708780) Stream added, broadcasting: 1\nI0512 08:19:44.230008 2382 log.go:172] (0xc0007d82c0) Reply frame received for 1\nI0512 08:19:44.230046 2382 log.go:172] (0xc0007d82c0) (0xc0007b05a0) Create stream\nI0512 08:19:44.230057 2382 log.go:172] (0xc0007d82c0) (0xc0007b05a0) Stream added, broadcasting: 3\nI0512 08:19:44.230842 2382 log.go:172] (0xc0007d82c0) Reply frame received for 3\nI0512 08:19:44.230874 2382 log.go:172] (0xc0007d82c0) (0xc0005f2d20) Create stream\nI0512 08:19:44.230895 2382 log.go:172] (0xc0007d82c0) (0xc0005f2d20) Stream added, broadcasting: 5\nI0512 08:19:44.231745 2382 log.go:172] (0xc0007d82c0) Reply frame received for 5\nI0512 08:19:44.304692 2382 
log.go:172] (0xc0007d82c0) Data frame received for 3\nI0512 08:19:44.304723 2382 log.go:172] (0xc0007b05a0) (3) Data frame handling\nI0512 08:19:44.304747 2382 log.go:172] (0xc0007b05a0) (3) Data frame sent\nI0512 08:19:44.304761 2382 log.go:172] (0xc0007d82c0) Data frame received for 3\nI0512 08:19:44.304768 2382 log.go:172] (0xc0007b05a0) (3) Data frame handling\nI0512 08:19:44.304825 2382 log.go:172] (0xc0007d82c0) Data frame received for 5\nI0512 08:19:44.304856 2382 log.go:172] (0xc0005f2d20) (5) Data frame handling\nI0512 08:19:44.304876 2382 log.go:172] (0xc0005f2d20) (5) Data frame sent\nI0512 08:19:44.304886 2382 log.go:172] (0xc0007d82c0) Data frame received for 5\nI0512 08:19:44.304894 2382 log.go:172] (0xc0005f2d20) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0512 08:19:44.306632 2382 log.go:172] (0xc0007d82c0) Data frame received for 1\nI0512 08:19:44.306651 2382 log.go:172] (0xc000708780) (1) Data frame handling\nI0512 08:19:44.306658 2382 log.go:172] (0xc000708780) (1) Data frame sent\nI0512 08:19:44.306685 2382 log.go:172] (0xc0007d82c0) (0xc000708780) Stream removed, broadcasting: 1\nI0512 08:19:44.306700 2382 log.go:172] (0xc0007d82c0) Go away received\nI0512 08:19:44.306899 2382 log.go:172] (0xc0007d82c0) (0xc000708780) Stream removed, broadcasting: 1\nI0512 08:19:44.306916 2382 log.go:172] (0xc0007d82c0) (0xc0007b05a0) Stream removed, broadcasting: 3\nI0512 08:19:44.306924 2382 log.go:172] (0xc0007d82c0) (0xc0005f2d20) Stream removed, broadcasting: 5\n" May 12 08:19:44.310: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 08:19:44.310: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 08:19:44.314: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 12 08:19:54.711: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 12 08:19:54.711: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 12 08:19:54.711: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 12 08:19:54.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hv6wt ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 08:19:54.951: INFO: stderr: "I0512 08:19:54.856828 2404 log.go:172] (0xc000702370) (0xc000623360) Create stream\nI0512 08:19:54.856866 2404 log.go:172] (0xc000702370) (0xc000623360) Stream added, broadcasting: 1\nI0512 08:19:54.859011 2404 log.go:172] (0xc000702370) Reply frame received for 1\nI0512 08:19:54.859049 2404 log.go:172] (0xc000702370) (0xc000542000) Create stream\nI0512 08:19:54.859061 2404 log.go:172] (0xc000702370) (0xc000542000) Stream added, broadcasting: 3\nI0512 08:19:54.862069 2404 log.go:172] (0xc000702370) Reply frame received for 3\nI0512 08:19:54.862095 2404 log.go:172] (0xc000702370) (0xc000623400) Create stream\nI0512 08:19:54.862105 2404 log.go:172] (0xc000702370) (0xc000623400) Stream added, broadcasting: 5\nI0512 08:19:54.862817 2404 log.go:172] (0xc000702370) Reply frame received for 5\nI0512 08:19:54.946463 2404 log.go:172] (0xc000702370) Data frame received for 3\nI0512 08:19:54.946517 2404 log.go:172] (0xc000542000) (3) Data frame handling\nI0512 08:19:54.946536 2404 log.go:172] (0xc000542000) (3) 
Data frame sent\nI0512 08:19:54.946551 2404 log.go:172] (0xc000702370) Data frame received for 3\nI0512 08:19:54.946568 2404 log.go:172] (0xc000542000) (3) Data frame handling\nI0512 08:19:54.946603 2404 log.go:172] (0xc000702370) Data frame received for 5\nI0512 08:19:54.946612 2404 log.go:172] (0xc000623400) (5) Data frame handling\nI0512 08:19:54.947559 2404 log.go:172] (0xc000702370) Data frame received for 1\nI0512 08:19:54.947584 2404 log.go:172] (0xc000623360) (1) Data frame handling\nI0512 08:19:54.947626 2404 log.go:172] (0xc000623360) (1) Data frame sent\nI0512 08:19:54.947657 2404 log.go:172] (0xc000702370) (0xc000623360) Stream removed, broadcasting: 1\nI0512 08:19:54.947820 2404 log.go:172] (0xc000702370) (0xc000623360) Stream removed, broadcasting: 1\nI0512 08:19:54.947834 2404 log.go:172] (0xc000702370) (0xc000542000) Stream removed, broadcasting: 3\nI0512 08:19:54.948021 2404 log.go:172] (0xc000702370) (0xc000623400) Stream removed, broadcasting: 5\n" May 12 08:19:54.952: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 08:19:54.952: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 08:19:54.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hv6wt ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 08:19:55.278: INFO: stderr: "I0512 08:19:55.074690 2425 log.go:172] (0xc0001386e0) (0xc000689360) Create stream\nI0512 08:19:55.074749 2425 log.go:172] (0xc0001386e0) (0xc000689360) Stream added, broadcasting: 1\nI0512 08:19:55.076845 2425 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0512 08:19:55.076894 2425 log.go:172] (0xc0001386e0) (0xc0005e8000) Create stream\nI0512 08:19:55.076916 2425 log.go:172] (0xc0001386e0) (0xc0005e8000) Stream added, broadcasting: 3\nI0512 08:19:55.078246 2425 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0512 08:19:55.078290 2425 log.go:172] (0xc0001386e0) (0xc000686000) Create stream\nI0512 08:19:55.078306 2425 log.go:172] (0xc0001386e0) (0xc000686000) Stream added, broadcasting: 5\nI0512 08:19:55.079173 2425 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0512 08:19:55.270445 2425 log.go:172] (0xc0001386e0) Data frame received for 5\nI0512 08:19:55.270481 2425 log.go:172] (0xc000686000) (5) Data frame handling\nI0512 08:19:55.270513 2425 log.go:172] (0xc0001386e0) Data frame received for 3\nI0512 08:19:55.270544 2425 log.go:172] (0xc0005e8000) (3) Data frame handling\nI0512 08:19:55.270646 2425 log.go:172] (0xc0005e8000) (3) Data frame sent\nI0512 08:19:55.270671 2425 log.go:172] (0xc0001386e0) Data frame received for 3\nI0512 08:19:55.270686 2425 log.go:172] (0xc0005e8000) (3) Data frame handling\nI0512 08:19:55.272800 2425 log.go:172] (0xc0001386e0) Data frame received for 1\nI0512 08:19:55.272837 2425 log.go:172] (0xc000689360) (1) Data frame handling\nI0512 08:19:55.272845 2425 log.go:172] (0xc000689360) (1) Data frame sent\nI0512 08:19:55.272852 2425 log.go:172] (0xc0001386e0) (0xc000689360) Stream removed, broadcasting: 1\nI0512 08:19:55.272871 2425 log.go:172] (0xc0001386e0) Go away received\nI0512 08:19:55.273342 2425 log.go:172] (0xc0001386e0) (0xc000689360) Stream removed, broadcasting: 1\nI0512 08:19:55.273388 2425 log.go:172] (0xc0001386e0) (0xc0005e8000) Stream removed, broadcasting: 3\nI0512 08:19:55.273401 2425 log.go:172] (0xc0001386e0) (0xc000686000) Stream removed, broadcasting: 5\n" May 12 
08:19:55.278: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 08:19:55.278: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 08:19:55.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hv6wt ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 08:19:55.768: INFO: stderr: "I0512 08:19:55.432825 2448 log.go:172] (0xc000138630) (0xc00084d7c0) Create stream\nI0512 08:19:55.432894 2448 log.go:172] (0xc000138630) (0xc00084d7c0) Stream added, broadcasting: 1\nI0512 08:19:55.437785 2448 log.go:172] (0xc000138630) Reply frame received for 1\nI0512 08:19:55.437821 2448 log.go:172] (0xc000138630) (0xc0003aa5a0) Create stream\nI0512 08:19:55.437831 2448 log.go:172] (0xc000138630) (0xc0003aa5a0) Stream added, broadcasting: 3\nI0512 08:19:55.438667 2448 log.go:172] (0xc000138630) Reply frame received for 3\nI0512 08:19:55.438693 2448 log.go:172] (0xc000138630) (0xc00059e000) Create stream\nI0512 08:19:55.438700 2448 log.go:172] (0xc000138630) (0xc00059e000) Stream added, broadcasting: 5\nI0512 08:19:55.439413 2448 log.go:172] (0xc000138630) Reply frame received for 5\nI0512 08:19:55.761707 2448 log.go:172] (0xc000138630) Data frame received for 3\nI0512 08:19:55.761754 2448 log.go:172] (0xc0003aa5a0) (3) Data frame handling\nI0512 08:19:55.761785 2448 log.go:172] (0xc0003aa5a0) (3) Data frame sent\nI0512 08:19:55.762153 2448 log.go:172] (0xc000138630) Data frame received for 3\nI0512 08:19:55.762186 2448 log.go:172] (0xc0003aa5a0) (3) Data frame handling\nI0512 08:19:55.762216 2448 log.go:172] (0xc000138630) Data frame received for 5\nI0512 08:19:55.762231 2448 log.go:172] (0xc00059e000) (5) Data frame handling\nI0512 08:19:55.764707 2448 log.go:172] (0xc000138630) Data frame received for 1\nI0512 08:19:55.764736 2448 log.go:172] (0xc00084d7c0) (1) Data frame handling\nI0512 08:19:55.764763 2448 log.go:172] (0xc00084d7c0) (1) Data frame sent\nI0512 08:19:55.764780 2448 log.go:172] (0xc000138630) (0xc00084d7c0) Stream removed, broadcasting: 1\nI0512 08:19:55.764799 2448 log.go:172] (0xc000138630) Go away received\nI0512 08:19:55.765103 2448 log.go:172] (0xc000138630) (0xc00084d7c0) Stream removed, broadcasting: 1\nI0512 08:19:55.765360 2448 log.go:172] (0xc000138630) (0xc0003aa5a0) Stream removed, broadcasting: 3\nI0512 08:19:55.765375 2448 log.go:172] (0xc000138630) (0xc00059e000) Stream removed, broadcasting: 5\n" May 12 08:19:55.768: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 08:19:55.769: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 08:19:55.769: INFO: Waiting for statefulset status.replicas updated to 0 May 12 08:19:55.780: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 12 08:20:05.788: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 08:20:05.788: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 12 08:20:05.788: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 12 08:20:05.818: INFO: POD NODE PHASE GRACE CONDITIONS May 12 08:20:05.818: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:13 +0000 UTC }] May 12 08:20:05.818: INFO: ss-1 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC }] May 12 08:20:05.818: INFO: ss-2 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC }] May 12 08:20:05.818: INFO: May 12 08:20:05.818: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 08:20:06.824: INFO: POD NODE PHASE GRACE CONDITIONS May 12 08:20:06.824: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:13 +0000 UTC }] May 12 08:20:06.824: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC }] May 12 08:20:06.824: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC }] May 12 08:20:06.824: INFO: May 12 08:20:06.824: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 08:20:07.827: INFO: POD NODE PHASE GRACE CONDITIONS May 12 08:20:07.827: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:13 +0000 UTC }] May 12 08:20:07.827: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC }] May 12 08:20:07.827: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC }] May 12 08:20:07.827: INFO: May 12 08:20:07.827: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 08:20:08.839: INFO: POD NODE PHASE GRACE CONDITIONS May 12 08:20:08.839: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:13 +0000 UTC }] May 12 08:20:08.839: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC }] May 12 08:20:08.839: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC }] May 12 08:20:08.839: INFO: May 12 08:20:08.839: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 08:20:09.895: INFO: POD NODE PHASE GRACE CONDITIONS May 12 08:20:09.895: INFO: ss-0 hunter-worker2 Pending 0s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:13 +0000 UTC }] May 12 08:20:09.895: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC }] May 12 08:20:09.895: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC }] May 12 08:20:09.895: INFO: May 12 08:20:09.895: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 08:20:10.900: INFO: POD NODE PHASE GRACE CONDITIONS May 12 08:20:10.900: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC }] May 12 08:20:10.900: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC }] May 12 08:20:10.900: INFO: May 12 08:20:10.900: INFO: StatefulSet ss has not reached scale 0, at 2 May 12 08:20:12.147: INFO: POD NODE PHASE GRACE CONDITIONS May 12 08:20:12.147: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 08:19:33 +0000 UTC }] May 12 08:20:12.147: INFO: May 12 08:20:12.147: INFO: StatefulSet ss has not reached scale 0, at 1 May 12 08:20:13.151: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.648326676s May 12 08:20:14.154: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.644283743s May 12 08:20:15.159: INFO: Verifying statefulset ss doesn't scale past 0 for another 640.888272ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-hv6wt May 12 08:20:16.163: INFO: Scaling statefulset ss to 0 May 12 08:20:16.173: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 12 08:20:16.175: INFO: Deleting all statefulset in ns e2e-tests-statefulset-hv6wt May 12 08:20:16.177: 
INFO: Scaling statefulset ss to 0 May 12 08:20:16.185: INFO: Waiting for statefulset status.replicas updated to 0 May 12 08:20:16.187: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:20:16.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-hv6wt" for this suite. May 12 08:20:22.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:20:22.273: INFO: namespace: e2e-tests-statefulset-hv6wt, resource: bindings, ignored listing per whitelist May 12 08:20:22.327: INFO: namespace e2e-tests-statefulset-hv6wt deletion completed in 6.122917821s • [SLOW TEST:71.568 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:20:22.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9hbqw STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 08:20:22.609: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 12 08:20:50.938: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.147 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-9hbqw PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 08:20:50.938: INFO: >>> kubeConfig: /root/.kube/config I0512 08:20:50.963292 6 log.go:172] (0xc0011744d0) (0xc001e34140) Create stream I0512 08:20:50.963326 6 log.go:172] (0xc0011744d0) (0xc001e34140) Stream added, broadcasting: 1 I0512 08:20:50.965710 6 log.go:172] (0xc0011744d0) Reply frame received for 1 I0512 08:20:50.965759 6 log.go:172] (0xc0011744d0) (0xc0025820a0) Create stream I0512 08:20:50.965778 6 log.go:172] (0xc0011744d0) (0xc0025820a0) Stream added, broadcasting: 3 I0512 08:20:50.966578 6 log.go:172] (0xc0011744d0) Reply frame received for 3 I0512 08:20:50.966608 6 log.go:172] (0xc0011744d0) (0xc002582140) Create stream I0512 08:20:50.966620 6 log.go:172] (0xc0011744d0) (0xc002582140) Stream added, broadcasting: 5 I0512 08:20:50.967336 6 log.go:172] (0xc0011744d0) Reply frame received for 5 I0512 08:20:52.039994 6 log.go:172] 
(0xc0011744d0) Data frame received for 5 I0512 08:20:52.040030 6 log.go:172] (0xc002582140) (5) Data frame handling I0512 08:20:52.040053 6 log.go:172] (0xc0011744d0) Data frame received for 3 I0512 08:20:52.040065 6 log.go:172] (0xc0025820a0) (3) Data frame handling I0512 08:20:52.040083 6 log.go:172] (0xc0025820a0) (3) Data frame sent I0512 08:20:52.040091 6 log.go:172] (0xc0011744d0) Data frame received for 3 I0512 08:20:52.040102 6 log.go:172] (0xc0025820a0) (3) Data frame handling I0512 08:20:52.042114 6 log.go:172] (0xc0011744d0) Data frame received for 1 I0512 08:20:52.042134 6 log.go:172] (0xc001e34140) (1) Data frame handling I0512 08:20:52.042144 6 log.go:172] (0xc001e34140) (1) Data frame sent I0512 08:20:52.042158 6 log.go:172] (0xc0011744d0) (0xc001e34140) Stream removed, broadcasting: 1 I0512 08:20:52.042255 6 log.go:172] (0xc0011744d0) Go away received I0512 08:20:52.042321 6 log.go:172] (0xc0011744d0) (0xc001e34140) Stream removed, broadcasting: 1 I0512 08:20:52.042361 6 log.go:172] (0xc0011744d0) (0xc0025820a0) Stream removed, broadcasting: 3 I0512 08:20:52.042374 6 log.go:172] (0xc0011744d0) (0xc002582140) Stream removed, broadcasting: 5 May 12 08:20:52.042: INFO: Found all expected endpoints: [netserver-0] May 12 08:20:52.046: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.72 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-9hbqw PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 08:20:52.046: INFO: >>> kubeConfig: /root/.kube/config I0512 08:20:52.077857 6 log.go:172] (0xc0005d8d10) (0xc000cc4780) Create stream I0512 08:20:52.077884 6 log.go:172] (0xc0005d8d10) (0xc000cc4780) Stream added, broadcasting: 1 I0512 08:20:52.079999 6 log.go:172] (0xc0005d8d10) Reply frame received for 1 I0512 08:20:52.080032 6 log.go:172] (0xc0005d8d10) (0xc001e341e0) Create stream I0512 08:20:52.080042 6 log.go:172] (0xc0005d8d10) (0xc001e341e0) Stream added, broadcasting: 3 I0512 08:20:52.081074 6 log.go:172] (0xc0005d8d10) Reply frame received for 3 I0512 08:20:52.081335 6 log.go:172] (0xc0005d8d10) (0xc001e34280) Create stream I0512 08:20:52.081360 6 log.go:172] (0xc0005d8d10) (0xc001e34280) Stream added, broadcasting: 5 I0512 08:20:52.082394 6 log.go:172] (0xc0005d8d10) Reply frame received for 5 I0512 08:20:53.163466 6 log.go:172] (0xc0005d8d10) Data frame received for 3 I0512 08:20:53.163514 6 log.go:172] (0xc001e341e0) (3) Data frame handling I0512 08:20:53.163540 6 log.go:172] (0xc001e341e0) (3) Data frame sent I0512 08:20:53.163604 6 log.go:172] (0xc0005d8d10) Data frame received for 5 I0512 08:20:53.163630 6 log.go:172] (0xc001e34280) (5) Data frame handling I0512 08:20:53.163827 6 log.go:172] (0xc0005d8d10) Data frame received for 3 I0512 08:20:53.163857 6 log.go:172] (0xc001e341e0) (3) Data frame handling I0512 08:20:53.165985 6 log.go:172] (0xc0005d8d10) Data frame received for 1 I0512 08:20:53.166021 6 log.go:172] (0xc000cc4780) (1) Data frame handling I0512 08:20:53.166059 6 log.go:172] (0xc000cc4780) (1) Data frame sent I0512 08:20:53.166085 6 log.go:172] (0xc0005d8d10) (0xc000cc4780) Stream removed, broadcasting: 1 I0512 08:20:53.166106 6 log.go:172] (0xc0005d8d10) Go away received I0512 08:20:53.166291 6 log.go:172] (0xc0005d8d10) (0xc000cc4780) Stream removed, broadcasting: 1 I0512 08:20:53.166324 6 log.go:172] (0xc0005d8d10) (0xc001e341e0) Stream removed, broadcasting: 3 I0512 08:20:53.166348 6 log.go:172] (0xc0005d8d10) (0xc001e34280) Stream 
removed, broadcasting: 5 May 12 08:20:53.166: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:20:53.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-9hbqw" for this suite. May 12 08:21:17.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:21:17.197: INFO: namespace: e2e-tests-pod-network-test-9hbqw, resource: bindings, ignored listing per whitelist May 12 08:21:17.241: INFO: namespace e2e-tests-pod-network-test-9hbqw deletion completed in 24.070766544s • [SLOW TEST:54.914 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:21:17.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0512 08:21:27.473087 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
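The garbage-collector steps just above ("create the rc", "delete the rc", "wait for all pods to be garbage collected") can be reproduced by hand with plain kubectl. A minimal sketch, assuming a cluster reachable through the same kubeconfig; the namespace and controller names (gc-demo, nginx-rc) are illustrative and not taken from the test:

kubectl --kubeconfig=/root/.kube/config create namespace gc-demo
cat <<EOF | kubectl --kubeconfig=/root/.kube/config -n gc-demo apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 2
  selector:
    app: nginx-rc
  template:
    metadata:
      labels:
        app: nginx-rc
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# Cascading deletion (the default) hands the RC's pods to the garbage
# collector, which is what the test waits on before gathering metrics.
kubectl --kubeconfig=/root/.kube/config -n gc-demo delete rc nginx-rc
kubectl --kubeconfig=/root/.kube/config -n gc-demo get pods -l app=nginx-rc -w

Passing --cascade=false to the delete would orphan the pods instead, which is the behaviour covered by the separate "orphaning" garbage-collector tests rather than this one.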
May 12 08:21:27.473: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:21:27.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-w5gv9" for this suite. May 12 08:21:35.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:21:35.520: INFO: namespace: e2e-tests-gc-w5gv9, resource: bindings, ignored listing per whitelist May 12 08:21:35.564: INFO: namespace e2e-tests-gc-w5gv9 deletion completed in 8.088339107s • [SLOW TEST:18.322 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:21:35.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-lwlkj May 12 08:21:41.847: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-lwlkj STEP: checking the pod's current state and verifying that restartCount is present May 12 08:21:41.850: INFO: Initial restart count of pod liveness-http is 0 May 12 08:21:59.887: INFO: Restart count of pod e2e-tests-container-probe-lwlkj/liveness-http is now 1 (18.036635421s elapsed) May 12 08:22:19.936: INFO: Restart count of 
pod e2e-tests-container-probe-lwlkj/liveness-http is now 2 (38.086088592s elapsed) May 12 08:22:39.991: INFO: Restart count of pod e2e-tests-container-probe-lwlkj/liveness-http is now 3 (58.141318006s elapsed) May 12 08:23:00.044: INFO: Restart count of pod e2e-tests-container-probe-lwlkj/liveness-http is now 4 (1m18.193575709s elapsed) May 12 08:24:00.474: INFO: Restart count of pod e2e-tests-container-probe-lwlkj/liveness-http is now 5 (2m18.623964603s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:24:00.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-lwlkj" for this suite. May 12 08:24:06.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:24:06.589: INFO: namespace: e2e-tests-container-probe-lwlkj, resource: bindings, ignored listing per whitelist May 12 08:24:06.649: INFO: namespace e2e-tests-container-probe-lwlkj deletion completed in 6.097993765s • [SLOW TEST:151.085 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:24:06.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 12 08:24:06.747: INFO: Waiting up to 5m0s for pod "pod-f2454ca3-9429-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-emptydir-jlv6j" to be "success or failure" May 12 08:24:06.760: INFO: Pod "pod-f2454ca3-9429-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.725246ms May 12 08:24:08.764: INFO: Pod "pod-f2454ca3-9429-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016939412s May 12 08:24:10.768: INFO: Pod "pod-f2454ca3-9429-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021607687s STEP: Saw pod success May 12 08:24:10.769: INFO: Pod "pod-f2454ca3-9429-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:24:10.772: INFO: Trying to get logs from node hunter-worker2 pod pod-f2454ca3-9429-11ea-bb6f-0242ac11001c container test-container: STEP: delete the pod May 12 08:24:11.007: INFO: Waiting for pod pod-f2454ca3-9429-11ea-bb6f-0242ac11001c to disappear May 12 08:24:11.049: INFO: Pod pod-f2454ca3-9429-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:24:11.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-jlv6j" for this suite. May 12 08:24:17.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:24:17.577: INFO: namespace: e2e-tests-emptydir-jlv6j, resource: bindings, ignored listing per whitelist May 12 08:24:17.617: INFO: namespace e2e-tests-emptydir-jlv6j deletion completed in 6.563953111s • [SLOW TEST:10.968 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:24:17.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 12 08:24:17.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-spn28' May 12 08:24:20.529: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 08:24:20.529: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 May 12 08:24:24.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-spn28' May 12 08:24:24.981: INFO: stderr: "" May 12 08:24:24.981: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:24:24.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-spn28" for this suite. May 12 08:25:47.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:25:47.055: INFO: namespace: e2e-tests-kubectl-spn28, resource: bindings, ignored listing per whitelist May 12 08:25:47.094: INFO: namespace e2e-tests-kubectl-spn28 deletion completed in 1m22.09222875s • [SLOW TEST:89.477 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:25:47.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 08:25:47.553: INFO: Creating ReplicaSet my-hostname-basic-2e5bf82f-942a-11ea-bb6f-0242ac11001c May 12 08:25:47.749: INFO: Pod name my-hostname-basic-2e5bf82f-942a-11ea-bb6f-0242ac11001c: Found 0 pods out of 1 May 12 08:25:52.754: INFO: Pod name my-hostname-basic-2e5bf82f-942a-11ea-bb6f-0242ac11001c: Found 1 pods out of 1 May 12 08:25:52.754: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-2e5bf82f-942a-11ea-bb6f-0242ac11001c" is running May 12 08:25:52.757: INFO: Pod "my-hostname-basic-2e5bf82f-942a-11ea-bb6f-0242ac11001c-tvtgv" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 08:25:47 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 08:25:51 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 
UTC LastTransitionTime:2020-05-12 08:25:51 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 08:25:47 +0000 UTC Reason: Message:}]) May 12 08:25:52.757: INFO: Trying to dial the pod May 12 08:25:57.845: INFO: Controller my-hostname-basic-2e5bf82f-942a-11ea-bb6f-0242ac11001c: Got expected result from replica 1 [my-hostname-basic-2e5bf82f-942a-11ea-bb6f-0242ac11001c-tvtgv]: "my-hostname-basic-2e5bf82f-942a-11ea-bb6f-0242ac11001c-tvtgv", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:25:57.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-r8vf8" for this suite. May 12 08:26:03.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:26:03.981: INFO: namespace: e2e-tests-replicaset-r8vf8, resource: bindings, ignored listing per whitelist May 12 08:26:04.048: INFO: namespace e2e-tests-replicaset-r8vf8 deletion completed in 6.198289738s • [SLOW TEST:16.953 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:26:04.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-zmvqx STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zmvqx to expose endpoints map[] May 12 08:26:04.207: INFO: Get endpoints failed (15.311339ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 12 08:26:05.284: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zmvqx exposes endpoints map[] (1.091984676s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-zmvqx STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zmvqx to expose endpoints map[pod1:[100]] May 12 08:26:09.458: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zmvqx exposes endpoints map[pod1:[100]] (4.166015083s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-zmvqx STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zmvqx to expose endpoints map[pod2:[101] pod1:[100]] May 12 08:26:13.663: INFO: successfully validated that service 
multi-endpoint-test in namespace e2e-tests-services-zmvqx exposes endpoints map[pod1:[100] pod2:[101]] (4.20161343s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-zmvqx STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zmvqx to expose endpoints map[pod2:[101]] May 12 08:26:14.718: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zmvqx exposes endpoints map[pod2:[101]] (1.050779902s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-zmvqx STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zmvqx to expose endpoints map[] May 12 08:26:15.752: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zmvqx exposes endpoints map[] (1.029678675s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:26:15.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-zmvqx" for this suite. May 12 08:26:21.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:26:21.927: INFO: namespace: e2e-tests-services-zmvqx, resource: bindings, ignored listing per whitelist May 12 08:26:22.060: INFO: namespace e2e-tests-services-zmvqx deletion completed in 6.180731606s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:18.012 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:26:22.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-42fd95ef-942a-11ea-bb6f-0242ac11001c STEP: Creating secret with name secret-projected-all-test-volume-42fd95be-942a-11ea-bb6f-0242ac11001c STEP: Creating a pod to test Check all projections for projected volume plugin May 12 08:26:22.209: INFO: Waiting up to 5m0s for pod "projected-volume-42fd9534-942a-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-zv5p7" to be "success or failure" May 12 08:26:22.225: INFO: Pod "projected-volume-42fd9534-942a-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.13588ms May 12 08:26:24.230: INFO: Pod "projected-volume-42fd9534-942a-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020567s May 12 08:26:26.234: INFO: Pod "projected-volume-42fd9534-942a-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024397928s STEP: Saw pod success May 12 08:26:26.234: INFO: Pod "projected-volume-42fd9534-942a-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:26:26.236: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-42fd9534-942a-11ea-bb6f-0242ac11001c container projected-all-volume-test: STEP: delete the pod May 12 08:26:26.270: INFO: Waiting for pod projected-volume-42fd9534-942a-11ea-bb6f-0242ac11001c to disappear May 12 08:26:26.292: INFO: Pod projected-volume-42fd9534-942a-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:26:26.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zv5p7" for this suite. May 12 08:26:32.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:26:32.421: INFO: namespace: e2e-tests-projected-zv5p7, resource: bindings, ignored listing per whitelist May 12 08:26:32.452: INFO: namespace e2e-tests-projected-zv5p7 deletion completed in 6.156668327s • [SLOW TEST:10.391 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:26:32.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-4930200e-942a-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume secrets May 12 08:26:32.648: INFO: Waiting up to 5m0s for pod "pod-secrets-493bdc61-942a-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-secrets-j54tp" to be "success or failure" May 12 08:26:32.657: INFO: Pod "pod-secrets-493bdc61-942a-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.82286ms May 12 08:26:34.744: INFO: Pod "pod-secrets-493bdc61-942a-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09530375s May 12 08:26:36.748: INFO: Pod "pod-secrets-493bdc61-942a-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09981772s May 12 08:26:38.827: INFO: Pod "pod-secrets-493bdc61-942a-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.179139779s STEP: Saw pod success May 12 08:26:38.828: INFO: Pod "pod-secrets-493bdc61-942a-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:26:38.830: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-493bdc61-942a-11ea-bb6f-0242ac11001c container secret-volume-test: STEP: delete the pod May 12 08:26:38.924: INFO: Waiting for pod pod-secrets-493bdc61-942a-11ea-bb6f-0242ac11001c to disappear May 12 08:26:39.073: INFO: Pod pod-secrets-493bdc61-942a-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:26:39.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-j54tp" for this suite. May 12 08:26:45.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:26:45.442: INFO: namespace: e2e-tests-secrets-j54tp, resource: bindings, ignored listing per whitelist May 12 08:26:45.446: INFO: namespace e2e-tests-secrets-j54tp deletion completed in 6.369355155s • [SLOW TEST:12.994 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:26:45.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 12 08:26:52.588: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:26:53.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-rmmfj" for this suite. 
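The adopt/release sequence above ("Given a Pod with a 'name' label ... Then the pod is released") comes down to label selection plus ownerReferences. A minimal sketch, assuming kubectl already points at the cluster; the names (adopt-demo, pod-adoption-release) mirror the test's pod name but are otherwise illustrative:

kubectl create namespace adopt-demo
# A bare pod carrying the label the ReplicaSet will select on.
kubectl -n adopt-demo run pod-adoption-release --image=docker.io/library/nginx:1.14-alpine \
  --restart=Never --labels=name=pod-adoption-release
cat <<EOF | kubectl -n adopt-demo apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# The existing pod matches the selector, so the ReplicaSet adopts it rather
# than creating a replacement; its ownerReferences now name the ReplicaSet.
kubectl -n adopt-demo get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].kind}'
# Re-labelling the pod releases it; the ReplicaSet then creates a new pod to
# get back to one matching replica.
kubectl -n adopt-demo label pod pod-adoption-release name=released --overwrite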
May 12 08:27:21.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:27:21.811: INFO: namespace: e2e-tests-replicaset-rmmfj, resource: bindings, ignored listing per whitelist May 12 08:27:21.811: INFO: namespace e2e-tests-replicaset-rmmfj deletion completed in 28.199712253s • [SLOW TEST:36.365 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:27:21.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 12 08:27:26.735: INFO: Successfully updated pod "annotationupdate66a71e7a-942a-11ea-bb6f-0242ac11001c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:27:28.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vrfzc" for this suite. 
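The annotation-update check above relies on the downward API being served through a projected volume, whose files the kubelet refreshes when pod metadata changes. A minimal sketch, assuming kubectl against the same cluster; the pod name, annotation key, image and mount path are illustrative rather than the test's own:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    builder: alice
spec:
  containers:
  - name: client-container
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
# Change the annotation; after the kubelet's next sync the projected file
# reflects the new value, which is the update the test waits to observe.
kubectl annotate pod annotationupdate-demo builder=bob --overwrite
kubectl exec annotationupdate-demo -- cat /etc/podinfo/annotations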
May 12 08:27:52.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:27:52.879: INFO: namespace: e2e-tests-projected-vrfzc, resource: bindings, ignored listing per whitelist May 12 08:27:52.931: INFO: namespace e2e-tests-projected-vrfzc deletion completed in 24.158314688s • [SLOW TEST:31.119 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:27:52.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs May 12 08:27:53.846: INFO: Waiting up to 5m0s for pod "pod-797a645c-942a-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-emptydir-bkl47" to be "success or failure" May 12 08:27:53.859: INFO: Pod "pod-797a645c-942a-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.371102ms May 12 08:27:55.984: INFO: Pod "pod-797a645c-942a-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138076784s May 12 08:27:58.248: INFO: Pod "pod-797a645c-942a-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.401768077s May 12 08:28:00.252: INFO: Pod "pod-797a645c-942a-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.406545033s May 12 08:28:02.257: INFO: Pod "pod-797a645c-942a-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.411033743s May 12 08:28:04.756: INFO: Pod "pod-797a645c-942a-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 10.910682741s May 12 08:28:06.760: INFO: Pod "pod-797a645c-942a-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.914375302s STEP: Saw pod success May 12 08:28:06.760: INFO: Pod "pod-797a645c-942a-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:28:06.764: INFO: Trying to get logs from node hunter-worker pod pod-797a645c-942a-11ea-bb6f-0242ac11001c container test-container: STEP: delete the pod May 12 08:28:07.220: INFO: Waiting for pod pod-797a645c-942a-11ea-bb6f-0242ac11001c to disappear May 12 08:28:07.265: INFO: Pod pod-797a645c-942a-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:28:07.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-bkl47" for this suite. 
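The "volume on tmpfs should have the correct mode" case above asserts two things: an emptyDir with medium: Memory is mounted as tmpfs, and the volume root carries the default 0777 mode. A minimal equivalent, assuming a pullable busybox image; the pod and volume names are illustrative and the real test uses its own mounttest image:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Print the filesystem type and the permission bits of the volume root.
    command: ["sh", "-c", "mount | grep /test-volume; stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF
kubectl logs emptydir-tmpfs-demo   # once the pod has run: expect a tmpfs mount line and "777"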
May 12 08:28:13.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:28:13.891: INFO: namespace: e2e-tests-emptydir-bkl47, resource: bindings, ignored listing per whitelist May 12 08:28:13.910: INFO: namespace e2e-tests-emptydir-bkl47 deletion completed in 6.64236667s • [SLOW TEST:20.979 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:28:13.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-nqffn A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-nqffn;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-nqffn A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-nqffn.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-nqffn.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-nqffn.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-nqffn.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-nqffn.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-nqffn.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-nqffn.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-nqffn.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 172.100.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.100.172_udp@PTR;check="$$(dig +tcp +noall +answer +search 172.100.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.100.172_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-nqffn A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-nqffn;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-nqffn A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-nqffn;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-nqffn.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-nqffn.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-nqffn.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-nqffn.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-nqffn.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-nqffn.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-nqffn.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-nqffn.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-nqffn.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 172.100.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.100.172_udp@PTR;check="$$(dig +tcp +noall +answer +search 172.100.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.100.172_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 08:28:26.438: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:26.446: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:26.453: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:26.473: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:26.475: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:26.477: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nqffn from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:26.480: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nqffn from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:26.482: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:26.484: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:26.486: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:26.489: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:26.506: INFO: Lookups using e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service 
jessie_udp@dns-test-service.e2e-tests-dns-nqffn jessie_tcp@dns-test-service.e2e-tests-dns-nqffn jessie_udp@dns-test-service.e2e-tests-dns-nqffn.svc jessie_tcp@dns-test-service.e2e-tests-dns-nqffn.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc] May 12 08:28:31.686: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:31.745: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:31.750: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:32.018: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:32.020: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:32.023: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nqffn from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:32.025: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nqffn from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:32.028: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:32.031: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:32.033: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:32.036: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:32.055: INFO: Lookups using e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn.svc 
jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-nqffn jessie_tcp@dns-test-service.e2e-tests-dns-nqffn jessie_udp@dns-test-service.e2e-tests-dns-nqffn.svc jessie_tcp@dns-test-service.e2e-tests-dns-nqffn.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc] May 12 08:28:36.524: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:36.534: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:36.539: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:36.562: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:36.564: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:36.567: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nqffn from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:36.570: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nqffn from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:36.572: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:36.575: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:36.577: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:36.579: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:36.632: INFO: Lookups using e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn 
wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-nqffn jessie_tcp@dns-test-service.e2e-tests-dns-nqffn jessie_udp@dns-test-service.e2e-tests-dns-nqffn.svc jessie_tcp@dns-test-service.e2e-tests-dns-nqffn.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc] May 12 08:28:41.510: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:41.520: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:41.527: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:41.609: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:41.611: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:41.614: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nqffn from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:41.617: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nqffn from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:41.620: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:41.623: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:41.626: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:41.629: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:41.647: INFO: Lookups using e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c failed for: 
[wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-nqffn jessie_tcp@dns-test-service.e2e-tests-dns-nqffn jessie_udp@dns-test-service.e2e-tests-dns-nqffn.svc jessie_tcp@dns-test-service.e2e-tests-dns-nqffn.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc] May 12 08:28:46.866: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:46.912: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:46.917: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:46.941: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:46.943: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:46.945: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nqffn from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:46.947: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nqffn from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:46.949: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:46.952: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:46.954: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:46.957: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:47.095: INFO: Lookups using 
e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-nqffn jessie_tcp@dns-test-service.e2e-tests-dns-nqffn jessie_udp@dns-test-service.e2e-tests-dns-nqffn.svc jessie_tcp@dns-test-service.e2e-tests-dns-nqffn.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc] May 12 08:28:51.511: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:51.519: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:51.525: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:51.545: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:51.548: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:51.550: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nqffn from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:51.552: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nqffn from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:51.555: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:51.558: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:51.560: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:51.563: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc from pod e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c: the server could not find the requested resource (get pods 
dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c) May 12 08:28:52.121: INFO: Lookups using e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn wheezy_tcp@dns-test-service.e2e-tests-dns-nqffn.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-nqffn jessie_tcp@dns-test-service.e2e-tests-dns-nqffn jessie_udp@dns-test-service.e2e-tests-dns-nqffn.svc jessie_tcp@dns-test-service.e2e-tests-dns-nqffn.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-nqffn.svc] May 12 08:28:56.722: INFO: DNS probes using e2e-tests-dns-nqffn/dns-test-85c921d4-942a-11ea-bb6f-0242ac11001c succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:28:59.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-nqffn" for this suite. May 12 08:29:16.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:29:16.780: INFO: namespace: e2e-tests-dns-nqffn, resource: bindings, ignored listing per whitelist May 12 08:29:16.816: INFO: namespace e2e-tests-dns-nqffn deletion completed in 16.727572197s • [SLOW TEST:62.905 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:29:16.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 12 08:29:17.811: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-j9wbt,SelfLink:/api/v1/namespaces/e2e-tests-watch-j9wbt/configmaps/e2e-watch-test-configmap-a,UID:ab83590a-942a-11ea-99e8-0242ac110002,ResourceVersion:10125495,Generation:0,CreationTimestamp:2020-05-12 08:29:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 
08:29:17.811: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-j9wbt,SelfLink:/api/v1/namespaces/e2e-tests-watch-j9wbt/configmaps/e2e-watch-test-configmap-a,UID:ab83590a-942a-11ea-99e8-0242ac110002,ResourceVersion:10125495,Generation:0,CreationTimestamp:2020-05-12 08:29:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 12 08:29:27.817: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-j9wbt,SelfLink:/api/v1/namespaces/e2e-tests-watch-j9wbt/configmaps/e2e-watch-test-configmap-a,UID:ab83590a-942a-11ea-99e8-0242ac110002,ResourceVersion:10125514,Generation:0,CreationTimestamp:2020-05-12 08:29:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 12 08:29:27.817: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-j9wbt,SelfLink:/api/v1/namespaces/e2e-tests-watch-j9wbt/configmaps/e2e-watch-test-configmap-a,UID:ab83590a-942a-11ea-99e8-0242ac110002,ResourceVersion:10125514,Generation:0,CreationTimestamp:2020-05-12 08:29:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 12 08:29:37.824: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-j9wbt,SelfLink:/api/v1/namespaces/e2e-tests-watch-j9wbt/configmaps/e2e-watch-test-configmap-a,UID:ab83590a-942a-11ea-99e8-0242ac110002,ResourceVersion:10125534,Generation:0,CreationTimestamp:2020-05-12 08:29:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 08:29:37.824: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-j9wbt,SelfLink:/api/v1/namespaces/e2e-tests-watch-j9wbt/configmaps/e2e-watch-test-configmap-a,UID:ab83590a-942a-11ea-99e8-0242ac110002,ResourceVersion:10125534,Generation:0,CreationTimestamp:2020-05-12 08:29:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 12 08:29:47.852: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-j9wbt,SelfLink:/api/v1/namespaces/e2e-tests-watch-j9wbt/configmaps/e2e-watch-test-configmap-a,UID:ab83590a-942a-11ea-99e8-0242ac110002,ResourceVersion:10125554,Generation:0,CreationTimestamp:2020-05-12 08:29:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 08:29:47.852: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-j9wbt,SelfLink:/api/v1/namespaces/e2e-tests-watch-j9wbt/configmaps/e2e-watch-test-configmap-a,UID:ab83590a-942a-11ea-99e8-0242ac110002,ResourceVersion:10125554,Generation:0,CreationTimestamp:2020-05-12 08:29:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 12 08:29:58.083: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-j9wbt,SelfLink:/api/v1/namespaces/e2e-tests-watch-j9wbt/configmaps/e2e-watch-test-configmap-b,UID:c38cc0f3-942a-11ea-99e8-0242ac110002,ResourceVersion:10125572,Generation:0,CreationTimestamp:2020-05-12 08:29:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 08:29:58.083: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-j9wbt,SelfLink:/api/v1/namespaces/e2e-tests-watch-j9wbt/configmaps/e2e-watch-test-configmap-b,UID:c38cc0f3-942a-11ea-99e8-0242ac110002,ResourceVersion:10125572,Generation:0,CreationTimestamp:2020-05-12 08:29:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 12 08:30:08.095: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-j9wbt,SelfLink:/api/v1/namespaces/e2e-tests-watch-j9wbt/configmaps/e2e-watch-test-configmap-b,UID:c38cc0f3-942a-11ea-99e8-0242ac110002,ResourceVersion:10125592,Generation:0,CreationTimestamp:2020-05-12 08:29:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 08:30:08.095: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-j9wbt,SelfLink:/api/v1/namespaces/e2e-tests-watch-j9wbt/configmaps/e2e-watch-test-configmap-b,UID:c38cc0f3-942a-11ea-99e8-0242ac110002,ResourceVersion:10125592,Generation:0,CreationTimestamp:2020-05-12 08:29:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:30:18.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-j9wbt" for this suite. May 12 08:30:24.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:30:24.431: INFO: namespace: e2e-tests-watch-j9wbt, resource: bindings, ignored listing per whitelist May 12 08:30:24.440: INFO: namespace e2e-tests-watch-j9wbt deletion completed in 6.338823644s • [SLOW TEST:67.624 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:30:24.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition May 12 08:30:24.731: INFO: Waiting up to 5m0s for pod "var-expansion-d38ab17f-942a-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-var-expansion-mgzx9" to be "success or failure" May 12 08:30:24.742: INFO: Pod "var-expansion-d38ab17f-942a-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.489905ms May 12 08:30:26.824: INFO: Pod "var-expansion-d38ab17f-942a-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093169778s May 12 08:30:28.844: INFO: Pod "var-expansion-d38ab17f-942a-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11299628s May 12 08:30:30.854: INFO: Pod "var-expansion-d38ab17f-942a-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.123085527s STEP: Saw pod success May 12 08:30:30.854: INFO: Pod "var-expansion-d38ab17f-942a-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:30:30.856: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-d38ab17f-942a-11ea-bb6f-0242ac11001c container dapi-container: STEP: delete the pod May 12 08:30:30.907: INFO: Waiting for pod var-expansion-d38ab17f-942a-11ea-bb6f-0242ac11001c to disappear May 12 08:30:30.928: INFO: Pod var-expansion-d38ab17f-942a-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:30:30.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-mgzx9" for this suite. May 12 08:30:37.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:30:37.275: INFO: namespace: e2e-tests-var-expansion-mgzx9, resource: bindings, ignored listing per whitelist May 12 08:30:37.331: INFO: namespace e2e-tests-var-expansion-mgzx9 deletion completed in 6.399698278s • [SLOW TEST:12.891 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:30:37.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-db4adc85-942a-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume secrets May 12 08:30:37.796: INFO: Waiting up to 5m0s for pod "pod-secrets-db5a3a16-942a-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-secrets-rs798" to be "success or failure" May 12 08:30:37.811: INFO: Pod "pod-secrets-db5a3a16-942a-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.791435ms May 12 08:30:39.815: INFO: Pod "pod-secrets-db5a3a16-942a-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018271978s May 12 08:30:41.827: INFO: Pod "pod-secrets-db5a3a16-942a-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030150972s STEP: Saw pod success May 12 08:30:41.827: INFO: Pod "pod-secrets-db5a3a16-942a-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:30:41.829: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-db5a3a16-942a-11ea-bb6f-0242ac11001c container secret-volume-test: STEP: delete the pod May 12 08:30:41.854: INFO: Waiting for pod pod-secrets-db5a3a16-942a-11ea-bb6f-0242ac11001c to disappear May 12 08:30:41.859: INFO: Pod pod-secrets-db5a3a16-942a-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:30:41.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-rs798" for this suite. May 12 08:30:47.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:30:47.967: INFO: namespace: e2e-tests-secrets-rs798, resource: bindings, ignored listing per whitelist May 12 08:30:47.975: INFO: namespace e2e-tests-secrets-rs798 deletion completed in 6.113989962s • [SLOW TEST:10.644 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:30:47.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 08:30:48.345: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 12 08:30:53.349: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 12 08:30:55.358: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 12 08:30:55.802: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-kx6r8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kx6r8/deployments/test-cleanup-deployment,UID:e5d3e262-942a-11ea-99e8-0242ac110002,ResourceVersion:10125751,Generation:1,CreationTimestamp:2020-05-12 08:30:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 12 08:30:55.805: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:30:55.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-kx6r8" for this suite. 
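The deployment cleanup test above relies on the dumped spec's RevisionHistoryLimit:*0 (i.e. revisionHistoryLimit: 0) to prune superseded ReplicaSets. A minimal hand-run spot-check of that behaviour, assuming the namespace and label from this run were still present (they are being torn down at this point in the log), might look like:

# Hypothetical spot-check, not part of the e2e run: with revisionHistoryLimit 0,
# only the Deployment's current ReplicaSet should remain once the rollout
# completes; the pre-existing "cleanup-pod" ReplicaSet should have been deleted.
kubectl -n e2e-tests-deployment-kx6r8 get replicasets -l name=cleanup-pod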
May 12 08:31:09.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:31:09.268: INFO: namespace: e2e-tests-deployment-kx6r8, resource: bindings, ignored listing per whitelist May 12 08:31:09.275: INFO: namespace e2e-tests-deployment-kx6r8 deletion completed in 13.376066798s • [SLOW TEST:21.300 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:31:09.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:31:13.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-tq7vv" for this suite. 
May 12 08:32:08.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:32:08.074: INFO: namespace: e2e-tests-kubelet-test-tq7vv, resource: bindings, ignored listing per whitelist May 12 08:32:08.104: INFO: namespace e2e-tests-kubelet-test-tq7vv deletion completed in 54.255431248s • [SLOW TEST:58.828 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:32:08.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:32:15.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-jftnv" for this suite. May 12 08:32:21.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:32:21.946: INFO: namespace: e2e-tests-namespaces-jftnv, resource: bindings, ignored listing per whitelist May 12 08:32:21.951: INFO: namespace e2e-tests-namespaces-jftnv deletion completed in 6.505174073s STEP: Destroying namespace "e2e-tests-nsdeletetest-h7gvt" for this suite. May 12 08:32:21.954: INFO: Namespace e2e-tests-nsdeletetest-h7gvt was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-mdf9h" for this suite. 
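The Namespaces test above checks that recreating a deleted namespace does not resurrect the service that lived in it. An illustrative manual equivalent of that final verification (the test namespaces from this run are already destroyed, so names are for illustration only):

# List services in the recreated test namespace; the expectation the e2e test
# asserts is that none exist ("No resources found" from kubectl).
kubectl get services -n e2e-tests-nsdeletetest-mdf9h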
May 12 08:32:28.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:32:28.179: INFO: namespace: e2e-tests-nsdeletetest-mdf9h, resource: bindings, ignored listing per whitelist May 12 08:32:28.223: INFO: namespace e2e-tests-nsdeletetest-mdf9h deletion completed in 6.269750979s • [SLOW TEST:20.119 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:32:28.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 12 08:32:28.383: INFO: Waiting up to 5m0s for pod "pod-1d4415cf-942b-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-emptydir-z79fk" to be "success or failure" May 12 08:32:28.386: INFO: Pod "pod-1d4415cf-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.66059ms May 12 08:32:30.390: INFO: Pod "pod-1d4415cf-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007471179s May 12 08:32:32.394: INFO: Pod "pod-1d4415cf-942b-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011446577s STEP: Saw pod success May 12 08:32:32.394: INFO: Pod "pod-1d4415cf-942b-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:32:32.397: INFO: Trying to get logs from node hunter-worker pod pod-1d4415cf-942b-11ea-bb6f-0242ac11001c container test-container: STEP: delete the pod May 12 08:32:32.828: INFO: Waiting for pod pod-1d4415cf-942b-11ea-bb6f-0242ac11001c to disappear May 12 08:32:32.831: INFO: Pod pod-1d4415cf-942b-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:32:32.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-z79fk" for this suite. 
May 12 08:32:40.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:32:41.127: INFO: namespace: e2e-tests-emptydir-z79fk, resource: bindings, ignored listing per whitelist May 12 08:32:41.134: INFO: namespace e2e-tests-emptydir-z79fk deletion completed in 8.299509932s • [SLOW TEST:12.911 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:32:41.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 08:32:41.325: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24fa83cf-942b-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-k2kcl" to be "success or failure" May 12 08:32:41.341: INFO: Pod "downwardapi-volume-24fa83cf-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.100708ms May 12 08:32:43.345: INFO: Pod "downwardapi-volume-24fa83cf-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019987308s May 12 08:32:45.495: INFO: Pod "downwardapi-volume-24fa83cf-942b-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.169390856s May 12 08:32:47.508: INFO: Pod "downwardapi-volume-24fa83cf-942b-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.182855097s STEP: Saw pod success May 12 08:32:47.508: INFO: Pod "downwardapi-volume-24fa83cf-942b-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:32:47.511: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-24fa83cf-942b-11ea-bb6f-0242ac11001c container client-container: STEP: delete the pod May 12 08:32:47.759: INFO: Waiting for pod downwardapi-volume-24fa83cf-942b-11ea-bb6f-0242ac11001c to disappear May 12 08:32:48.000: INFO: Pod downwardapi-volume-24fa83cf-942b-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:32:48.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-k2kcl" for this suite. 
May 12 08:32:54.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:32:54.134: INFO: namespace: e2e-tests-projected-k2kcl, resource: bindings, ignored listing per whitelist May 12 08:32:54.171: INFO: namespace e2e-tests-projected-k2kcl deletion completed in 6.167210481s • [SLOW TEST:13.037 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:32:54.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:32:54.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-9mttv" for this suite. 
May 12 08:33:16.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:33:16.447: INFO: namespace: e2e-tests-pods-9mttv, resource: bindings, ignored listing per whitelist May 12 08:33:16.989: INFO: namespace e2e-tests-pods-9mttv deletion completed in 22.675045746s • [SLOW TEST:22.818 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:33:16.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-3a637513-942b-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 08:33:17.321: INFO: Waiting up to 5m0s for pod "pod-configmaps-3a6f9720-942b-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-configmap-2t76w" to be "success or failure" May 12 08:33:17.534: INFO: Pod "pod-configmaps-3a6f9720-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 212.751046ms May 12 08:33:19.537: INFO: Pod "pod-configmaps-3a6f9720-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215969241s May 12 08:33:21.540: INFO: Pod "pod-configmaps-3a6f9720-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218820407s May 12 08:33:23.593: INFO: Pod "pod-configmaps-3a6f9720-942b-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.271572035s May 12 08:33:25.597: INFO: Pod "pod-configmaps-3a6f9720-942b-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.276383683s STEP: Saw pod success May 12 08:33:25.597: INFO: Pod "pod-configmaps-3a6f9720-942b-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:33:25.600: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-3a6f9720-942b-11ea-bb6f-0242ac11001c container configmap-volume-test: STEP: delete the pod May 12 08:33:25.688: INFO: Waiting for pod pod-configmaps-3a6f9720-942b-11ea-bb6f-0242ac11001c to disappear May 12 08:33:25.705: INFO: Pod pod-configmaps-3a6f9720-942b-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:33:25.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-2t76w" for this suite. 
May 12 08:33:31.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:33:31.860: INFO: namespace: e2e-tests-configmap-2t76w, resource: bindings, ignored listing per whitelist May 12 08:33:31.903: INFO: namespace e2e-tests-configmap-2t76w deletion completed in 6.194012429s • [SLOW TEST:14.914 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:33:31.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all May 12 08:33:32.188: INFO: Waiting up to 5m0s for pod "client-containers-4341fb44-942b-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-containers-kzzz9" to be "success or failure" May 12 08:33:32.266: INFO: Pod "client-containers-4341fb44-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 78.558518ms May 12 08:33:34.270: INFO: Pod "client-containers-4341fb44-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082889464s May 12 08:33:36.275: INFO: Pod "client-containers-4341fb44-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086971517s May 12 08:33:38.278: INFO: Pod "client-containers-4341fb44-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090032552s May 12 08:33:40.281: INFO: Pod "client-containers-4341fb44-942b-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093237245s STEP: Saw pod success May 12 08:33:40.281: INFO: Pod "client-containers-4341fb44-942b-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:33:40.283: INFO: Trying to get logs from node hunter-worker pod client-containers-4341fb44-942b-11ea-bb6f-0242ac11001c container test-container: STEP: delete the pod May 12 08:33:40.392: INFO: Waiting for pod client-containers-4341fb44-942b-11ea-bb6f-0242ac11001c to disappear May 12 08:33:40.408: INFO: Pod client-containers-4341fb44-942b-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:33:40.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-kzzz9" for this suite. 
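The "override all" containers test above relies on the fact that a pod spec's command replaces the image's ENTRYPOINT and args replaces its CMD. A minimal sketch with hypothetical names (the image used by the conformance test is not shown in this log):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo         # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29     # assumption
    command: ["/bin/echo"]                    # overrides the image ENTRYPOINT
    args: ["override", "arguments"]           # overrides the image CMD

Setting only args would keep the image's ENTRYPOINT and override just its default arguments.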
May 12 08:33:46.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:33:46.486: INFO: namespace: e2e-tests-containers-kzzz9, resource: bindings, ignored listing per whitelist May 12 08:33:46.546: INFO: namespace e2e-tests-containers-kzzz9 deletion completed in 6.134102651s • [SLOW TEST:14.642 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:33:46.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 08:33:46.674: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4beefb01-942b-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-hms22" to be "success or failure" May 12 08:33:46.698: INFO: Pod "downwardapi-volume-4beefb01-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.22654ms May 12 08:33:48.719: INFO: Pod "downwardapi-volume-4beefb01-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045254829s May 12 08:33:50.858: INFO: Pod "downwardapi-volume-4beefb01-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18365956s May 12 08:33:53.019: INFO: Pod "downwardapi-volume-4beefb01-942b-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.345341539s STEP: Saw pod success May 12 08:33:53.019: INFO: Pod "downwardapi-volume-4beefb01-942b-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:33:53.023: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-4beefb01-942b-11ea-bb6f-0242ac11001c container client-container: STEP: delete the pod May 12 08:33:53.045: INFO: Waiting for pod downwardapi-volume-4beefb01-942b-11ea-bb6f-0242ac11001c to disappear May 12 08:33:53.297: INFO: Pod downwardapi-volume-4beefb01-942b-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:33:53.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hms22" for this suite. 
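The projected downwardAPI test above exercises per-item file modes: a projected volume exposes a downward-API field as a file and sets an explicit mode on that single item. A minimal sketch with hypothetical names and an illustrative mode value:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo        # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29     # assumption
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400                 # per-item mode, the property this test asserts on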
May 12 08:34:01.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:34:01.596: INFO: namespace: e2e-tests-projected-hms22, resource: bindings, ignored listing per whitelist May 12 08:34:01.632: INFO: namespace e2e-tests-projected-hms22 deletion completed in 8.329930399s • [SLOW TEST:15.086 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:34:01.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 08:34:01.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 12 08:34:02.102: INFO: stderr: "" May 12 08:34:02.102: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:34:02.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7m9pt" for this suite. 
May 12 08:34:08.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:34:08.148: INFO: namespace: e2e-tests-kubectl-7m9pt, resource: bindings, ignored listing per whitelist May 12 08:34:08.200: INFO: namespace e2e-tests-kubectl-7m9pt deletion completed in 6.093028755s • [SLOW TEST:6.568 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:34:08.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-58d6a546-942b-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume secrets May 12 08:34:08.358: INFO: Waiting up to 5m0s for pod "pod-secrets-58d748b5-942b-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-secrets-qdtdv" to be "success or failure" May 12 08:34:08.371: INFO: Pod "pod-secrets-58d748b5-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.987562ms May 12 08:34:10.375: INFO: Pod "pod-secrets-58d748b5-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017076511s May 12 08:34:12.379: INFO: Pod "pod-secrets-58d748b5-942b-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.021015s May 12 08:34:14.382: INFO: Pod "pod-secrets-58d748b5-942b-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024369466s STEP: Saw pod success May 12 08:34:14.382: INFO: Pod "pod-secrets-58d748b5-942b-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:34:14.385: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-58d748b5-942b-11ea-bb6f-0242ac11001c container secret-volume-test: STEP: delete the pod May 12 08:34:14.523: INFO: Waiting for pod pod-secrets-58d748b5-942b-11ea-bb6f-0242ac11001c to disappear May 12 08:34:14.553: INFO: Pod pod-secrets-58d748b5-942b-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:34:14.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-qdtdv" for this suite. 
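The secrets test above combines a key-to-path mapping with a per-item mode: one key of the secret is projected under a custom filename and given an explicit mode. A minimal sketch with hypothetical names (the secret value is a placeholder, base64 for "value-1"; the image is an assumption):

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map-demo           # hypothetical
data:
  data-1: dmFsdWUtMQ==
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo               # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29     # assumption
    command: ["sh", "-c", "ls -l /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-demo
      items:
      - key: data-1
        path: new-path-data-1          # the mapping
        mode: 0400                     # the per-item mode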
May 12 08:34:21.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:34:21.151: INFO: namespace: e2e-tests-secrets-qdtdv, resource: bindings, ignored listing per whitelist May 12 08:34:21.181: INFO: namespace e2e-tests-secrets-qdtdv deletion completed in 6.624561275s • [SLOW TEST:12.981 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:34:21.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium May 12 08:34:21.524: INFO: Waiting up to 5m0s for pod "pod-60ad9fcd-942b-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-emptydir-qkp9m" to be "success or failure" May 12 08:34:21.560: INFO: Pod "pod-60ad9fcd-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 35.809718ms May 12 08:34:23.564: INFO: Pod "pod-60ad9fcd-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039598765s May 12 08:34:25.569: INFO: Pod "pod-60ad9fcd-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043980741s May 12 08:34:27.571: INFO: Pod "pod-60ad9fcd-942b-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046872512s STEP: Saw pod success May 12 08:34:27.571: INFO: Pod "pod-60ad9fcd-942b-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:34:27.573: INFO: Trying to get logs from node hunter-worker2 pod pod-60ad9fcd-942b-11ea-bb6f-0242ac11001c container test-container: STEP: delete the pod May 12 08:34:27.794: INFO: Waiting for pod pod-60ad9fcd-942b-11ea-bb6f-0242ac11001c to disappear May 12 08:34:28.144: INFO: Pod pod-60ad9fcd-942b-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:34:28.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qkp9m" for this suite. 
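The emptyDir test above creates a pod with an emptyDir volume on the node's default medium (no medium field, i.e. backed by node disk rather than tmpfs) and checks the mode reported for the mount point. A minimal sketch with hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo             # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29     # assumption
    command: ["sh", "-c", "ls -ld /test-volume"]   # prints the mount-point mode the test asserts on
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                       # default medium; adding medium: Memory would switch the volume to tmpfs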
May 12 08:34:34.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:34:34.182: INFO: namespace: e2e-tests-emptydir-qkp9m, resource: bindings, ignored listing per whitelist May 12 08:34:34.245: INFO: namespace e2e-tests-emptydir-qkp9m deletion completed in 6.097404826s • [SLOW TEST:13.064 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:34:34.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-685b6838-942b-11ea-bb6f-0242ac11001c STEP: Creating the pod STEP: Updating configmap configmap-test-upd-685b6838-942b-11ea-bb6f-0242ac11001c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:36:05.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-2nkv5" for this suite. 
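The update test above mounts a ConfigMap as a volume, then updates the ConfigMap and waits for the kubelet to sync the new data into the already-running pod, which is why this case runs for well over a minute before teardown. A minimal sketch of the kind of objects involved, with hypothetical names and values:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-demo        # hypothetical
data:
  data-1: value-1                      # later updated in place, e.g. to value-2
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-watch-demo      # hypothetical
spec:
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29     # assumption
    command: ["sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 5; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd-demo

The propagation delay depends on the kubelet sync period and its ConfigMap cache, so the mounted value is updated eventually rather than immediately.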
May 12 08:36:28.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:36:28.103: INFO: namespace: e2e-tests-configmap-2nkv5, resource: bindings, ignored listing per whitelist May 12 08:36:28.120: INFO: namespace e2e-tests-configmap-2nkv5 deletion completed in 22.120322855s • [SLOW TEST:113.875 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:36:28.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:36:35.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-gqvn6" for this suite. 
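The adoption test above first creates a bare pod carrying a 'name' label and only afterwards creates a ReplicationController whose selector matches that label; instead of starting a new replica, the controller adopts the existing orphan. A minimal sketch of the label/selector pairing (the exact manifests are not shown in the log; the image is one used elsewhere in this run):

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption                 # the label the controller selects on
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption                 # matches the pre-existing pod, so it is adopted rather than duplicated
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine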
May 12 08:36:59.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:36:59.527: INFO: namespace: e2e-tests-replication-controller-gqvn6, resource: bindings, ignored listing per whitelist May 12 08:36:59.557: INFO: namespace e2e-tests-replication-controller-gqvn6 deletion completed in 24.24635439s • [SLOW TEST:31.436 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:36:59.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 08:36:59.728: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 12 08:36:59.736: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:36:59.739: INFO: Number of nodes with available pods: 0 May 12 08:36:59.739: INFO: Node hunter-worker is running more than one daemon pod May 12 08:37:00.744: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:00.748: INFO: Number of nodes with available pods: 0 May 12 08:37:00.748: INFO: Node hunter-worker is running more than one daemon pod May 12 08:37:01.953: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:02.226: INFO: Number of nodes with available pods: 0 May 12 08:37:02.226: INFO: Node hunter-worker is running more than one daemon pod May 12 08:37:02.744: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:02.747: INFO: Number of nodes with available pods: 0 May 12 08:37:02.747: INFO: Node hunter-worker is running more than one daemon pod May 12 08:37:03.748: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:03.752: INFO: Number of nodes with available pods: 0 May 12 08:37:03.752: INFO: Node hunter-worker is running more than one daemon pod May 12 08:37:04.743: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:04.746: INFO: Number of nodes with available pods: 2 May 12 08:37:04.746: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 12 08:37:04.798: INFO: Wrong image for pod: daemon-set-7cpnq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:04.798: INFO: Wrong image for pod: daemon-set-nbz4q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:04.814: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:05.819: INFO: Wrong image for pod: daemon-set-7cpnq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:05.819: INFO: Wrong image for pod: daemon-set-nbz4q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:05.824: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:06.820: INFO: Wrong image for pod: daemon-set-7cpnq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:06.820: INFO: Wrong image for pod: daemon-set-nbz4q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 12 08:37:06.824: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:07.819: INFO: Wrong image for pod: daemon-set-7cpnq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:07.819: INFO: Wrong image for pod: daemon-set-nbz4q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:07.824: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:08.820: INFO: Wrong image for pod: daemon-set-7cpnq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:08.820: INFO: Wrong image for pod: daemon-set-nbz4q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:08.820: INFO: Pod daemon-set-nbz4q is not available May 12 08:37:08.825: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:10.075: INFO: Wrong image for pod: daemon-set-7cpnq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:10.075: INFO: Wrong image for pod: daemon-set-nbz4q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:10.075: INFO: Pod daemon-set-nbz4q is not available May 12 08:37:10.115: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:11.045: INFO: Wrong image for pod: daemon-set-7cpnq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:11.045: INFO: Pod daemon-set-bbrgh is not available May 12 08:37:11.050: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:11.820: INFO: Wrong image for pod: daemon-set-7cpnq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:11.820: INFO: Pod daemon-set-bbrgh is not available May 12 08:37:11.824: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:12.819: INFO: Wrong image for pod: daemon-set-7cpnq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:12.819: INFO: Pod daemon-set-bbrgh is not available May 12 08:37:12.823: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:13.842: INFO: Wrong image for pod: daemon-set-7cpnq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 12 08:37:13.842: INFO: Pod daemon-set-bbrgh is not available May 12 08:37:13.846: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:14.818: INFO: Wrong image for pod: daemon-set-7cpnq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:14.818: INFO: Pod daemon-set-bbrgh is not available May 12 08:37:14.822: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:16.226: INFO: Wrong image for pod: daemon-set-7cpnq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:16.226: INFO: Pod daemon-set-bbrgh is not available May 12 08:37:16.230: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:16.946: INFO: Wrong image for pod: daemon-set-7cpnq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:16.957: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:17.819: INFO: Wrong image for pod: daemon-set-7cpnq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:17.823: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:18.819: INFO: Wrong image for pod: daemon-set-7cpnq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:18.824: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:19.820: INFO: Wrong image for pod: daemon-set-7cpnq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 12 08:37:19.820: INFO: Pod daemon-set-7cpnq is not available May 12 08:37:19.823: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:20.820: INFO: Pod daemon-set-4sj96 is not available May 12 08:37:20.824: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 12 08:37:20.827: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:20.830: INFO: Number of nodes with available pods: 1 May 12 08:37:20.830: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:37:21.835: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:21.839: INFO: Number of nodes with available pods: 1 May 12 08:37:21.839: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:37:22.836: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:22.840: INFO: Number of nodes with available pods: 1 May 12 08:37:22.840: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:37:23.974: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:24.011: INFO: Number of nodes with available pods: 1 May 12 08:37:24.011: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:37:24.837: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:24.839: INFO: Number of nodes with available pods: 1 May 12 08:37:24.839: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:37:26.053: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:37:26.101: INFO: Number of nodes with available pods: 2 May 12 08:37:26.101: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-rsghn, will wait for the garbage collector to delete the pods May 12 08:37:26.255: INFO: Deleting DaemonSet.extensions daemon-set took: 88.79124ms May 12 08:37:26.355: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.237018ms May 12 08:37:30.461: INFO: Number of nodes with available pods: 0 May 12 08:37:30.461: INFO: Number of running nodes: 0, number of available pods: 0 May 12 08:37:30.463: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-rsghn/daemonsets","resourceVersion":"10126924"},"items":null} May 12 08:37:30.465: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-rsghn/pods","resourceVersion":"10126924"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:37:30.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-rsghn" for this suite. 
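The DaemonSet case above creates a simple nginx DaemonSet, patches its pod template image to the redis test image, and then watches the RollingUpdate strategy replace the pod on each schedulable node (the tainted control-plane node is skipped throughout). A minimal sketch of a DaemonSet with an explicit RollingUpdate strategy, using the two images that appear in the log; the name and labels are hypothetical:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1                # one node at a time, matching the behaviour seen in the log
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # later updated to gcr.io/kubernetes-e2e-test-images/redis:1.0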
May 12 08:37:36.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:37:36.574: INFO: namespace: e2e-tests-daemonsets-rsghn, resource: bindings, ignored listing per whitelist May 12 08:37:36.623: INFO: namespace e2e-tests-daemonsets-rsghn deletion completed in 6.147114024s • [SLOW TEST:37.066 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:37:36.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 12 08:37:36.739: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-kp6j8,SelfLink:/api/v1/namespaces/e2e-tests-watch-kp6j8/configmaps/e2e-watch-test-resource-version,UID:d50df753-942b-11ea-99e8-0242ac110002,ResourceVersion:10126975,Generation:0,CreationTimestamp:2020-05-12 08:37:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 08:37:36.739: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-kp6j8,SelfLink:/api/v1/namespaces/e2e-tests-watch-kp6j8/configmaps/e2e-watch-test-resource-version,UID:d50df753-942b-11ea-99e8-0242ac110002,ResourceVersion:10126976,Generation:0,CreationTimestamp:2020-05-12 08:37:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:37:36.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-kp6j8" for this suite. 
May 12 08:37:44.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:37:44.903: INFO: namespace: e2e-tests-watch-kp6j8, resource: bindings, ignored listing per whitelist May 12 08:37:44.953: INFO: namespace e2e-tests-watch-kp6j8 deletion completed in 8.210311273s • [SLOW TEST:8.330 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:37:44.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-dad86f94-942b-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume secrets May 12 08:37:48.951: INFO: Waiting up to 5m0s for pod "pod-secrets-dc3f0a6b-942b-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-secrets-b28w2" to be "success or failure" May 12 08:37:49.351: INFO: Pod "pod-secrets-dc3f0a6b-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 400.496204ms May 12 08:37:51.461: INFO: Pod "pod-secrets-dc3f0a6b-942b-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.510397903s May 12 08:37:53.464: INFO: Pod "pod-secrets-dc3f0a6b-942b-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.513522769s May 12 08:37:55.468: INFO: Pod "pod-secrets-dc3f0a6b-942b-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.517213866s STEP: Saw pod success May 12 08:37:55.468: INFO: Pod "pod-secrets-dc3f0a6b-942b-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:37:55.470: INFO: Trying to get logs from node hunter-worker pod pod-secrets-dc3f0a6b-942b-11ea-bb6f-0242ac11001c container secret-volume-test: STEP: delete the pod May 12 08:37:55.667: INFO: Waiting for pod pod-secrets-dc3f0a6b-942b-11ea-bb6f-0242ac11001c to disappear May 12 08:37:55.703: INFO: Pod pod-secrets-dc3f0a6b-942b-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:37:55.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-b28w2" for this suite. 
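The cross-namespace secrets test above verifies that a pod mounts the secret from its own namespace even when a secret with the same name exists in another namespace (e2e-tests-secret-namespace-bwj5q in this run, torn down just below). A minimal sketch of two same-named secrets; the shared name and the values are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: secret-test                    # hypothetical shared name
  namespace: e2e-tests-secrets-b28w2
data:
  data-1: dmFsdWUtMQ==                 # "value-1"
---
apiVersion: v1
kind: Secret
metadata:
  name: secret-test                    # same name, different namespace
  namespace: e2e-tests-secret-namespace-bwj5q
data:
  data-1: b3RoZXItdmFsdWU=             # "other-value"

A pod in e2e-tests-secrets-b28w2 references the secret only by name, so with this sketch it must see "value-1" and never the other namespace's data.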
May 12 08:38:07.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:38:08.278: INFO: namespace: e2e-tests-secrets-b28w2, resource: bindings, ignored listing per whitelist May 12 08:38:08.289: INFO: namespace e2e-tests-secrets-b28w2 deletion completed in 12.582275417s STEP: Destroying namespace "e2e-tests-secret-namespace-bwj5q" for this suite. May 12 08:38:18.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:38:19.057: INFO: namespace: e2e-tests-secret-namespace-bwj5q, resource: bindings, ignored listing per whitelist May 12 08:38:19.101: INFO: namespace e2e-tests-secret-namespace-bwj5q deletion completed in 10.81199061s • [SLOW TEST:34.148 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:38:19.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0512 08:38:34.470126 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 12 08:38:34.470: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:38:34.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-rf6zc" for this suite. May 12 08:38:51.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:38:51.012: INFO: namespace: e2e-tests-gc-rf6zc, resource: bindings, ignored listing per whitelist May 12 08:38:51.574: INFO: namespace e2e-tests-gc-rf6zc deletion completed in 16.902847498s • [SLOW TEST:32.472 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:38:51.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 12 08:38:52.509: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 08:38:52.514: INFO: Waiting for terminating namespaces to be deleted... 
May 12 08:38:52.515: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 12 08:38:52.519: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 12 08:38:52.519: INFO: Container coredns ready: true, restart count 0 May 12 08:38:52.519: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 12 08:38:52.519: INFO: Container kube-proxy ready: true, restart count 0 May 12 08:38:52.519: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 12 08:38:52.519: INFO: Container kindnet-cni ready: true, restart count 0 May 12 08:38:52.519: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 12 08:38:52.524: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 12 08:38:52.524: INFO: Container kube-proxy ready: true, restart count 0 May 12 08:38:52.524: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 12 08:38:52.524: INFO: Container kindnet-cni ready: true, restart count 0 May 12 08:38:52.524: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 12 08:38:52.524: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 May 12 08:38:53.527: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker May 12 08:38:53.527: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 May 12 08:38:53.527: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker May 12 08:38:53.527: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 May 12 08:38:53.527: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 May 12 08:38:53.527: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-02d61395-942c-11ea-bb6f-0242ac11001c.160e3b21916407f1], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-hjd87/filler-pod-02d61395-942c-11ea-bb6f-0242ac11001c to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-02d61395-942c-11ea-bb6f-0242ac11001c.160e3b226b24eba9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-02d61395-942c-11ea-bb6f-0242ac11001c.160e3b2384cec6d2], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-02d61395-942c-11ea-bb6f-0242ac11001c.160e3b23b6361461], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-02d6c106-942c-11ea-bb6f-0242ac11001c.160e3b2195acb56f], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-hjd87/filler-pod-02d6c106-942c-11ea-bb6f-0242ac11001c to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-02d6c106-942c-11ea-bb6f-0242ac11001c.160e3b2282ae3c35], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-02d6c106-942c-11ea-bb6f-0242ac11001c.160e3b23c178dda9], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-02d6c106-942c-11ea-bb6f-0242ac11001c.160e3b23fcd154cb], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.160e3b246dcabc0f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:39:08.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-hjd87" for this suite. 
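The scheduling test above saturates each worker node's remaining allocatable CPU with filler pods, then creates one more pod whose CPU request cannot fit on any node and expects the FailedScheduling event quoted in the log ("0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu."). A minimal sketch of the kind of pod that triggers the failure; the request size is a placeholder, and the name only mirrors the event name pattern in the log:

apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: additional-pod
    image: k8s.gcr.io/pause:3.1        # same image the filler pods use in this run
    resources:
      requests:
        cpu: "1"                       # hypothetical; anything above the remaining allocatable CPU works
      limits:
        cpu: "1"

Because the pod stays Pending, the test passes once it observes the expected event, then removes the filler pods and the temporary node labels as logged above.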
May 12 08:39:23.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:39:23.062: INFO: namespace: e2e-tests-sched-pred-hjd87, resource: bindings, ignored listing per whitelist May 12 08:39:23.112: INFO: namespace e2e-tests-sched-pred-hjd87 deletion completed in 14.462112404s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:31.538 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:39:23.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 08:39:24.100: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:39:31.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-26zlt" for this suite. 
May 12 08:40:13.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:40:13.324: INFO: namespace: e2e-tests-pods-26zlt, resource: bindings, ignored listing per whitelist May 12 08:40:13.370: INFO: namespace e2e-tests-pods-26zlt deletion completed in 42.193679564s • [SLOW TEST:50.258 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:40:13.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components May 12 08:40:13.907: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 12 08:40:13.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6xbtj' May 12 08:40:17.872: INFO: stderr: "" May 12 08:40:17.872: INFO: stdout: "service/redis-slave created\n" May 12 08:40:17.872: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 12 08:40:17.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6xbtj' May 12 08:40:18.291: INFO: stderr: "" May 12 08:40:18.291: INFO: stdout: "service/redis-master created\n" May 12 08:40:18.291: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 12 08:40:18.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6xbtj' May 12 08:40:18.738: INFO: stderr: "" May 12 08:40:18.738: INFO: stdout: "service/frontend created\n" May 12 08:40:18.738: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 12 08:40:18.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6xbtj' May 12 08:40:19.035: INFO: stderr: "" May 12 08:40:19.035: INFO: stdout: "deployment.extensions/frontend created\n" May 12 08:40:19.035: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 12 08:40:19.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6xbtj' May 12 08:40:19.464: INFO: stderr: "" May 12 08:40:19.464: INFO: stdout: "deployment.extensions/redis-master created\n" May 12 08:40:19.465: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 12 08:40:19.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6xbtj' May 12 08:40:19.862: INFO: stderr: "" May 12 08:40:19.862: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app May 12 08:40:19.862: INFO: Waiting for all frontend pods to be Running. May 12 08:40:29.913: INFO: Waiting for frontend to serve content. May 12 08:40:30.664: INFO: Trying to add a new entry to the guestbook. May 12 08:40:30.824: INFO: Verifying that added entry can be retrieved. May 12 08:40:31.056: INFO: Failed to get response from guestbook. err: , response: {"data": ""} STEP: using delete to clean up resources May 12 08:40:36.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6xbtj' May 12 08:40:36.739: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 12 08:40:36.739: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 12 08:40:36.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6xbtj' May 12 08:40:37.413: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 08:40:37.413: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 12 08:40:37.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6xbtj' May 12 08:40:38.303: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 08:40:38.304: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 12 08:40:38.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6xbtj' May 12 08:40:38.504: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 08:40:38.504: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources May 12 08:40:38.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6xbtj' May 12 08:40:38.898: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 08:40:38.898: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 12 08:40:38.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6xbtj' May 12 08:40:39.477: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 08:40:39.477: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:40:39.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6xbtj" for this suite. 
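For readability, the guestbook frontend objects that the test pipes to 'kubectl create -f - --namespace=e2e-tests-kubectl-6xbtj' above are restated here as conventional multi-line YAML; every field value mirrors what the log already prints, nothing is added.

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns   # or 'env' if the cluster has no DNS service, as the original comment notes
        ports:
        - containerPort: 80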
May 12 08:41:23.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:41:23.968: INFO: namespace: e2e-tests-kubectl-6xbtj, resource: bindings, ignored listing per whitelist May 12 08:41:23.996: INFO: namespace e2e-tests-kubectl-6xbtj deletion completed in 44.433866928s • [SLOW TEST:70.626 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:41:23.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-5cd745bd-942c-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume secrets May 12 08:41:24.608: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5cdc7c0a-942c-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-l8lkc" to be "success or failure" May 12 08:41:24.647: INFO: Pod "pod-projected-secrets-5cdc7c0a-942c-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 39.386887ms May 12 08:41:26.652: INFO: Pod "pod-projected-secrets-5cdc7c0a-942c-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043663794s May 12 08:41:28.768: INFO: Pod "pod-projected-secrets-5cdc7c0a-942c-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15968872s May 12 08:41:30.772: INFO: Pod "pod-projected-secrets-5cdc7c0a-942c-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.163953455s May 12 08:41:32.776: INFO: Pod "pod-projected-secrets-5cdc7c0a-942c-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.168178403s STEP: Saw pod success May 12 08:41:32.776: INFO: Pod "pod-projected-secrets-5cdc7c0a-942c-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:41:32.779: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-5cdc7c0a-942c-11ea-bb6f-0242ac11001c container projected-secret-volume-test: STEP: delete the pod May 12 08:41:32.919: INFO: Waiting for pod pod-projected-secrets-5cdc7c0a-942c-11ea-bb6f-0242ac11001c to disappear May 12 08:41:32.941: INFO: Pod pod-projected-secrets-5cdc7c0a-942c-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:41:32.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-l8lkc" for this suite. May 12 08:41:43.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:41:43.169: INFO: namespace: e2e-tests-projected-l8lkc, resource: bindings, ignored listing per whitelist May 12 08:41:43.197: INFO: namespace e2e-tests-projected-l8lkc deletion completed in 10.253109631s • [SLOW TEST:19.201 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:41:43.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 08:42:04.548: INFO: Container started at 2020-05-12 08:41:48 +0000 UTC, pod became ready at 2020-05-12 08:42:03 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:42:04.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-k6pdw" for this suite. 
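The probe behaviour checked here (container starts, but the pod only turns Ready after the configured initial delay, and never restarts) can be expressed with a pod of roughly this shape; a minimal sketch assuming a busybox image and an exec probe, since the log does not show the pod spec itself:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-demo          # illustrative name, not taken from the log
spec:
  containers:
  - name: app
    image: busybox                    # assumed image
    command: ["sh", "-c", "touch /tmp/ready && sleep 600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/ready"]
      initialDelaySeconds: 15         # the pod must not report Ready before this delay elapses
      periodSeconds: 5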
May 12 08:42:26.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:42:26.645: INFO: namespace: e2e-tests-container-probe-k6pdw, resource: bindings, ignored listing per whitelist May 12 08:42:26.658: INFO: namespace e2e-tests-container-probe-k6pdw deletion completed in 22.105365274s • [SLOW TEST:43.460 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:42:26.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 12 08:42:38.902: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 08:42:38.929: INFO: Pod pod-with-prestop-http-hook still exists May 12 08:42:40.929: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 08:42:40.934: INFO: Pod pod-with-prestop-http-hook still exists May 12 08:42:42.929: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 08:42:42.933: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:42:42.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-g8tg8" for this suite. 
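The pod under test pairs a container carrying a preStop HTTP hook with the separately created handler pod (the "container to handle the HTTPGet hook request" above); a minimal sketch of the hooked pod, with the image, port, path, and handler address assumed since the log does not show them:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # name taken from the log
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause          # assumed image
    lifecycle:
      preStop:
        httpGet:
          path: /echo                # assumed path served by the handler pod
          port: 8080                 # assumed port
          host: 10.244.1.10          # illustrative address; in the test this points at the handler pod
# When this pod is deleted, the kubelet issues the GET before stopping the container,
# which is what the "check prestop hook" step verifies on the handler side.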
May 12 08:43:07.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:43:07.462: INFO: namespace: e2e-tests-container-lifecycle-hook-g8tg8, resource: bindings, ignored listing per whitelist May 12 08:43:07.465: INFO: namespace e2e-tests-container-lifecycle-hook-g8tg8 deletion completed in 24.523544406s • [SLOW TEST:40.807 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:43:07.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 12 08:43:14.499: INFO: 10 pods remaining May 12 08:43:14.499: INFO: 10 pods has nil DeletionTimestamp May 12 08:43:14.499: INFO: May 12 08:43:17.334: INFO: 0 pods remaining May 12 08:43:17.334: INFO: 0 pods has nil DeletionTimestamp May 12 08:43:17.334: INFO: May 12 08:43:18.024: INFO: 0 pods remaining May 12 08:43:18.024: INFO: 0 pods has nil DeletionTimestamp May 12 08:43:18.024: INFO: STEP: Gathering metrics W0512 08:43:18.462327 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 12 08:43:18.462: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:43:18.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-67gc7" for this suite. May 12 08:43:26.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:43:26.563: INFO: namespace: e2e-tests-gc-67gc7, resource: bindings, ignored listing per whitelist May 12 08:43:26.611: INFO: namespace e2e-tests-gc-67gc7 deletion completed in 8.146146244s • [SLOW TEST:19.146 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:43:26.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-a5d298b2-942c-11ea-bb6f-0242ac11001c May 12 08:43:27.035: INFO: Pod name my-hostname-basic-a5d298b2-942c-11ea-bb6f-0242ac11001c: Found 0 pods out of 1 May 12 08:43:32.039: INFO: Pod name my-hostname-basic-a5d298b2-942c-11ea-bb6f-0242ac11001c: Found 1 pods out of 1 May 12 08:43:32.039: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-a5d298b2-942c-11ea-bb6f-0242ac11001c" are running May 12 08:43:34.045: INFO: Pod "my-hostname-basic-a5d298b2-942c-11ea-bb6f-0242ac11001c-ptklg" is running (conditions: [{Type:Initialized Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 08:43:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 08:43:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a5d298b2-942c-11ea-bb6f-0242ac11001c]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 08:43:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a5d298b2-942c-11ea-bb6f-0242ac11001c]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 08:43:27 +0000 UTC Reason: Message:}]) May 12 08:43:34.046: INFO: Trying to dial the pod May 12 08:43:39.056: INFO: Controller my-hostname-basic-a5d298b2-942c-11ea-bb6f-0242ac11001c: Got expected result from replica 1 [my-hostname-basic-a5d298b2-942c-11ea-bb6f-0242ac11001c-ptklg]: "my-hostname-basic-a5d298b2-942c-11ea-bb6f-0242ac11001c-ptklg", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:43:39.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-qhpn2" for this suite. May 12 08:43:47.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:43:47.494: INFO: namespace: e2e-tests-replication-controller-qhpn2, resource: bindings, ignored listing per whitelist May 12 08:43:47.535: INFO: namespace e2e-tests-replication-controller-qhpn2 deletion completed in 8.474979153s • [SLOW TEST:20.924 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:43:47.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-pxdgp/configmap-test-b2984af5-942c-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume configMaps May 12 08:43:49.005: INFO: Waiting up to 5m0s for pod "pod-configmaps-b2c47e2d-942c-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-configmap-pxdgp" to be "success or failure" May 12 08:43:49.361: INFO: Pod "pod-configmaps-b2c47e2d-942c-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 356.376386ms May 12 08:43:51.366: INFO: Pod "pod-configmaps-b2c47e2d-942c-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.361114712s May 12 08:43:53.370: INFO: Pod "pod-configmaps-b2c47e2d-942c-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.365397536s May 12 08:43:55.374: INFO: Pod "pod-configmaps-b2c47e2d-942c-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.369444899s May 12 08:43:57.475: INFO: Pod "pod-configmaps-b2c47e2d-942c-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.470496671s May 12 08:43:59.567: INFO: Pod "pod-configmaps-b2c47e2d-942c-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.561937166s May 12 08:44:01.650: INFO: Pod "pod-configmaps-b2c47e2d-942c-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.645302738s STEP: Saw pod success May 12 08:44:01.650: INFO: Pod "pod-configmaps-b2c47e2d-942c-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:44:01.654: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-b2c47e2d-942c-11ea-bb6f-0242ac11001c container env-test: STEP: delete the pod May 12 08:44:02.256: INFO: Waiting for pod pod-configmaps-b2c47e2d-942c-11ea-bb6f-0242ac11001c to disappear May 12 08:44:02.330: INFO: Pod pod-configmaps-b2c47e2d-942c-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:44:02.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-pxdgp" for this suite. May 12 08:44:10.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:44:11.730: INFO: namespace: e2e-tests-configmap-pxdgp, resource: bindings, ignored listing per whitelist May 12 08:44:11.761: INFO: namespace e2e-tests-configmap-pxdgp deletion completed in 9.427576537s • [SLOW TEST:24.226 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:44:11.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 08:44:12.947: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c112d084-942c-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-zj2xt" to be "success or failure" May 12 
08:44:13.013: INFO: Pod "downwardapi-volume-c112d084-942c-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 65.873529ms May 12 08:44:15.452: INFO: Pod "downwardapi-volume-c112d084-942c-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.505162062s May 12 08:44:17.455: INFO: Pod "downwardapi-volume-c112d084-942c-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.507707508s May 12 08:44:19.572: INFO: Pod "downwardapi-volume-c112d084-942c-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.624848431s May 12 08:44:21.754: INFO: Pod "downwardapi-volume-c112d084-942c-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.807143821s May 12 08:44:24.202: INFO: Pod "downwardapi-volume-c112d084-942c-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.254370603s May 12 08:44:26.236: INFO: Pod "downwardapi-volume-c112d084-942c-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.289213638s STEP: Saw pod success May 12 08:44:26.237: INFO: Pod "downwardapi-volume-c112d084-942c-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:44:26.239: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-c112d084-942c-11ea-bb6f-0242ac11001c container client-container: STEP: delete the pod May 12 08:44:27.495: INFO: Waiting for pod downwardapi-volume-c112d084-942c-11ea-bb6f-0242ac11001c to disappear May 12 08:44:27.794: INFO: Pod downwardapi-volume-c112d084-942c-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:44:27.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zj2xt" for this suite. 
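What this test exercises is a projected downwardAPI volume exposing 'limits.cpu' for a container that sets no CPU limit, so the mounted file falls back to the node's allocatable CPU; a minimal sketch, with the image and paths assumed (only the container name 'client-container' echoes the log):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo       # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox                    # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu here, so cpu_limit reports the node's allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu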
May 12 08:44:35.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:44:36.425: INFO: namespace: e2e-tests-projected-zj2xt, resource: bindings, ignored listing per whitelist May 12 08:44:36.447: INFO: namespace e2e-tests-projected-zj2xt deletion completed in 8.647919952s • [SLOW TEST:24.685 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:44:36.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 12 08:44:37.618: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:37.620: INFO: Number of nodes with available pods: 0 May 12 08:44:37.620: INFO: Node hunter-worker is running more than one daemon pod May 12 08:44:38.858: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:39.087: INFO: Number of nodes with available pods: 0 May 12 08:44:39.087: INFO: Node hunter-worker is running more than one daemon pod May 12 08:44:39.633: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:39.636: INFO: Number of nodes with available pods: 0 May 12 08:44:39.636: INFO: Node hunter-worker is running more than one daemon pod May 12 08:44:40.885: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:41.058: INFO: Number of nodes with available pods: 0 May 12 08:44:41.058: INFO: Node hunter-worker is running more than one daemon pod May 12 08:44:41.932: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:41.936: INFO: Number of nodes with available pods: 0 May 12 08:44:41.936: INFO: Node hunter-worker is running more than one daemon pod May 12 08:44:42.625: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:42.628: INFO: Number of nodes with available pods: 0 May 12 08:44:42.628: INFO: Node hunter-worker is running more than one daemon pod May 12 08:44:43.778: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:43.782: INFO: Number of nodes with available pods: 0 May 12 08:44:43.782: INFO: Node hunter-worker is running more than one daemon pod May 12 08:44:44.837: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:44.839: INFO: Number of nodes with available pods: 0 May 12 08:44:44.839: INFO: Node hunter-worker is running more than one daemon pod May 12 08:44:45.662: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:45.885: INFO: Number of nodes with available pods: 1 May 12 08:44:45.885: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:44:46.916: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:46.921: INFO: Number of nodes with available pods: 2 May 12 08:44:46.921: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 12 08:44:47.765: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:47.815: INFO: Number of nodes with available pods: 1 May 12 08:44:47.815: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:44:48.820: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:48.823: INFO: Number of nodes with available pods: 1 May 12 08:44:48.823: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:44:50.132: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:50.156: INFO: Number of nodes with available pods: 1 May 12 08:44:50.156: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:44:50.819: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:50.823: INFO: Number of nodes with available pods: 1 May 12 08:44:50.823: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:44:51.940: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:51.952: INFO: Number of nodes with available pods: 1 May 12 08:44:51.952: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:44:52.819: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this 
node May 12 08:44:52.822: INFO: Number of nodes with available pods: 1 May 12 08:44:52.822: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:44:53.821: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:53.824: INFO: Number of nodes with available pods: 1 May 12 08:44:53.824: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:44:54.820: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:54.822: INFO: Number of nodes with available pods: 1 May 12 08:44:54.822: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:44:55.820: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:55.824: INFO: Number of nodes with available pods: 1 May 12 08:44:55.824: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:44:56.820: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:56.823: INFO: Number of nodes with available pods: 1 May 12 08:44:56.823: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:44:57.820: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:57.824: INFO: Number of nodes with available pods: 1 May 12 08:44:57.824: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:44:58.820: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:58.824: INFO: Number of nodes with available pods: 1 May 12 08:44:58.824: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:44:59.820: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:44:59.823: INFO: Number of nodes with available pods: 1 May 12 08:44:59.823: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:45:00.837: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:45:00.841: INFO: Number of nodes with available pods: 1 May 12 08:45:00.841: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:45:01.927: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:45:01.971: INFO: Number of nodes with available pods: 1 May 12 08:45:01.971: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:45:02.824: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:45:02.827: INFO: Number of nodes with available pods: 1 May 12 08:45:02.827: INFO: Node 
hunter-worker2 is running more than one daemon pod May 12 08:45:03.885: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:45:03.888: INFO: Number of nodes with available pods: 1 May 12 08:45:03.888: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:45:04.819: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:45:04.822: INFO: Number of nodes with available pods: 1 May 12 08:45:04.822: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:45:05.824: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:45:05.827: INFO: Number of nodes with available pods: 1 May 12 08:45:05.827: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:45:06.820: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:45:06.823: INFO: Number of nodes with available pods: 2 May 12 08:45:06.823: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-9487h, will wait for the garbage collector to delete the pods May 12 08:45:06.885: INFO: Deleting DaemonSet.extensions daemon-set took: 6.86939ms May 12 08:45:07.085: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.273903ms May 12 08:45:21.806: INFO: Number of nodes with available pods: 0 May 12 08:45:21.806: INFO: Number of running nodes: 0, number of available pods: 0 May 12 08:45:21.810: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-9487h/daemonsets","resourceVersion":"10128645"},"items":null} May 12 08:45:21.812: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-9487h/pods","resourceVersion":"10128645"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:45:21.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-9487h" for this suite. 
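A DaemonSet of roughly this shape reproduces what the log shows: one pod per schedulable node (hunter-worker and hunter-worker2), with hunter-control-plane skipped because the pod template carries no toleration for its node-role.kubernetes.io/master:NoSchedule taint. The label and image are assumptions; only the name 'daemon-set' comes from the log.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set            # assumed label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause  # assumed image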
May 12 08:45:27.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:45:27.890: INFO: namespace: e2e-tests-daemonsets-9487h, resource: bindings, ignored listing per whitelist May 12 08:45:27.910: INFO: namespace e2e-tests-daemonsets-9487h deletion completed in 6.086372786s • [SLOW TEST:51.463 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:45:27.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-edfa23ca-942c-11ea-bb6f-0242ac11001c STEP: Creating secret with name s-test-opt-upd-edfa2478-942c-11ea-bb6f-0242ac11001c STEP: Creating the pod STEP: Deleting secret s-test-opt-del-edfa23ca-942c-11ea-bb6f-0242ac11001c STEP: Updating secret s-test-opt-upd-edfa2478-942c-11ea-bb6f-0242ac11001c STEP: Creating secret with name s-test-opt-create-edfa24bf-942c-11ea-bb6f-0242ac11001c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:45:36.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-lmfrf" for this suite. 
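The behaviour validated here (a deleted optional secret disappearing from the volume, an updated one changing in place, a newly created one appearing) hinges on mounting secrets with 'optional: true'; a minimal sketch, with the pod and secret names invented for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo            # illustrative name
spec:
  containers:
  - name: app
    image: busybox                      # assumed image
    command: ["sh", "-c", "sleep 600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: demo-secret           # illustrative; the secret may not exist yet
      optional: true                    # pod starts even if the secret is absent; later changes show up in the mount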
May 12 08:46:02.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:46:02.252: INFO: namespace: e2e-tests-secrets-lmfrf, resource: bindings, ignored listing per whitelist May 12 08:46:02.288: INFO: namespace e2e-tests-secrets-lmfrf deletion completed in 26.12249429s • [SLOW TEST:34.378 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:46:02.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:46:02.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-g879g" for this suite. 
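The pod in this test simply runs a busybox command that always fails, and the only assertion is that such a pod can still be deleted cleanly; a minimal sketch with assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo       # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox           # assumed image
    command: ["/bin/false"]  # exits non-zero immediately, so the container crash-loops under the default restartPolicy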
May 12 08:46:08.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:46:08.711: INFO: namespace: e2e-tests-kubelet-test-g879g, resource: bindings, ignored listing per whitelist May 12 08:46:08.751: INFO: namespace e2e-tests-kubelet-test-g879g deletion completed in 6.222584179s • [SLOW TEST:6.463 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:46:08.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0512 08:46:50.083037 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 08:46:50.083: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:46:50.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-mg2lp" for this suite. 
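The "delete options say so" part refers to the propagationPolicy sent with the DELETE of the replication controller: with Orphan, the garbage collector leaves the rc's pods alone, which is why the test waits 30 seconds to confirm nothing is removed. One way to express that request body, sketched here and not taken from the log:

# DeleteOptions body accompanying the DELETE of the rc (sketch)
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan   # dependents (the rc's pods) are orphaned rather than deleted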
May 12 08:47:02.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:47:02.226: INFO: namespace: e2e-tests-gc-mg2lp, resource: bindings, ignored listing per whitelist May 12 08:47:02.287: INFO: namespace e2e-tests-gc-mg2lp deletion completed in 12.200628328s • [SLOW TEST:53.536 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:47:02.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-7fngv May 12 08:47:13.115: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-7fngv STEP: checking the pod's current state and verifying that restartCount is present May 12 08:47:13.118: INFO: Initial restart count of pod liveness-http is 0 May 12 08:47:33.602: INFO: Restart count of pod e2e-tests-container-probe-7fngv/liveness-http is now 1 (20.483455858s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:47:33.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-7fngv" for this suite. 
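The restart counted above comes from an HTTP liveness probe on /healthz failing and the kubelet restarting the container; a minimal sketch of such a pod, with the image and timings assumed (only the pod name 'liveness-http' echoes the log):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness        # assumed image; it serves /healthz and starts returning errors after a while
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3                # once /healthz fails, the kubelet restarts the container and restartCount becomes 1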
May 12 08:47:44.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:47:45.481: INFO: namespace: e2e-tests-container-probe-7fngv, resource: bindings, ignored listing per whitelist May 12 08:47:45.509: INFO: namespace e2e-tests-container-probe-7fngv deletion completed in 11.30963398s • [SLOW TEST:43.222 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:47:45.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 12 08:47:45.747: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:47:45.749: INFO: Number of nodes with available pods: 0 May 12 08:47:45.749: INFO: Node hunter-worker is running more than one daemon pod May 12 08:47:46.754: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:47:46.757: INFO: Number of nodes with available pods: 0 May 12 08:47:46.757: INFO: Node hunter-worker is running more than one daemon pod May 12 08:47:47.754: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:47:47.758: INFO: Number of nodes with available pods: 0 May 12 08:47:47.759: INFO: Node hunter-worker is running more than one daemon pod May 12 08:47:48.996: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:47:48.999: INFO: Number of nodes with available pods: 0 May 12 08:47:48.999: INFO: Node hunter-worker is running more than one daemon pod May 12 08:47:49.863: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:47:49.865: INFO: Number of nodes with available pods: 0 May 12 08:47:49.865: INFO: Node hunter-worker is running more than one daemon pod May 12 08:47:50.923: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:47:50.926: INFO: Number of nodes with available pods: 0 May 12 08:47:50.926: INFO: Node hunter-worker is running more than one daemon pod May 12 08:47:52.268: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:47:52.618: INFO: Number of nodes with available pods: 1 May 12 08:47:52.618: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:47:53.093: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:47:53.096: INFO: Number of nodes with available pods: 1 May 12 08:47:53.096: INFO: Node hunter-worker2 is running more than one daemon pod May 12 08:47:54.027: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:47:54.049: INFO: Number of nodes with available pods: 2 May 12 08:47:54.049: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 12 08:47:54.110: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 08:47:54.617: INFO: Number of nodes with available pods: 2 May 12 08:47:54.617: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-md5dp, will wait for the garbage collector to delete the pods May 12 08:47:57.595: INFO: Deleting DaemonSet.extensions daemon-set took: 741.055694ms May 12 08:47:57.995: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.213645ms May 12 08:48:11.411: INFO: Number of nodes with available pods: 0 May 12 08:48:11.411: INFO: Number of running nodes: 0, number of available pods: 0 May 12 08:48:11.414: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-md5dp/daemonsets","resourceVersion":"10129303"},"items":null} May 12 08:48:11.416: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-md5dp/pods","resourceVersion":"10129303"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:48:11.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-md5dp" for this suite. 
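As in the earlier DaemonSet test, every polling line repeats that the pods "can't tolerate node hunter-control-plane" because of its node-role.kubernetes.io/master:NoSchedule taint; if the daemon were meant to run there as well, the pod template would need a matching toleration, sketched below with assumed names and image:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-with-toleration     # illustrative name
spec:
  selector:
    matchLabels:
      app: daemon-set-with-toleration
  template:
    metadata:
      labels:
        app: daemon-set-with-toleration
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master   # the taint the log reports on hunter-control-plane
        effect: NoSchedule
      containers:
      - name: app
        image: k8s.gcr.io/pause               # assumed image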
May 12 08:48:24.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:48:24.408: INFO: namespace: e2e-tests-daemonsets-md5dp, resource: bindings, ignored listing per whitelist May 12 08:48:24.450: INFO: namespace e2e-tests-daemonsets-md5dp deletion completed in 13.022804027s • [SLOW TEST:38.940 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:48:24.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server May 12 08:48:24.564: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:48:24.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dvwwg" for this suite. 
May 12 08:48:30.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:48:30.993: INFO: namespace: e2e-tests-kubectl-dvwwg, resource: bindings, ignored listing per whitelist May 12 08:48:30.993: INFO: namespace e2e-tests-kubectl-dvwwg deletion completed in 6.327449859s • [SLOW TEST:6.543 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:48:30.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 08:48:31.522: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"5b398022-942d-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0009d2e7a), BlockOwnerDeletion:(*bool)(0xc0009d2e7b)}} May 12 08:48:31.535: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"5b358155-942d-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001289e92), BlockOwnerDeletion:(*bool)(0xc001289e93)}} May 12 08:48:31.754: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"5b35f277-942d-11ea-99e8-0242ac110002", Controller:(*bool)(0xc000e2bf12), BlockOwnerDeletion:(*bool)(0xc000e2bf13)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:48:37.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-dtvph" for this suite. 
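The garbage-collector spec above wires pod1 -> pod3 -> pod2 -> pod1 through ownerReferences and verifies that deletion is not wedged by the cycle. A small sketch of what one link of such a circular ownerReference looks like on an ObjectMeta follows; the name and UID are placeholders, not the values from this run.

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	truth := true
	// Placeholder UID; in the test each pod's real UID is used.
	ref := metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               "pod3",
		UID:                types.UID("00000000-0000-0000-0000-000000000000"),
		Controller:         &truth,
		BlockOwnerDeletion: &truth,
	}
	// pod1 would carry this reference, pod2 would point at pod1, and pod3 at pod2,
	// closing the circle the spec is exercising.
	out, _ := json.MarshalIndent(ref, "", "  ")
	fmt.Println(string(out))
}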
May 12 08:48:43.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:48:43.297: INFO: namespace: e2e-tests-gc-dtvph, resource: bindings, ignored listing per whitelist May 12 08:48:43.306: INFO: namespace e2e-tests-gc-dtvph deletion completed in 6.151499003s • [SLOW TEST:12.313 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:48:43.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 12 08:48:43.395: INFO: PodSpec: initContainers in spec.initContainers May 12 08:49:41.644: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-626cd0ff-942d-11ea-bb6f-0242ac11001c", GenerateName:"", Namespace:"e2e-tests-init-container-2brfh", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-2brfh/pods/pod-init-626cd0ff-942d-11ea-bb6f-0242ac11001c", UID:"62799085-942d-11ea-99e8-0242ac110002", ResourceVersion:"10129584", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724870123, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"395923521"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-rnl8l", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00252b2c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rnl8l", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rnl8l", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rnl8l", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002134e28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001cb5e60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002134eb0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002134ed0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002134ed8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002134edc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724870123, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724870123, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724870123, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724870123, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.191", StartTime:(*v1.Time)(0xc0025949c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001530af0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001530b60)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://137a571c986a6d4fadd63a50df2ea88d1c823b2d92bae07837f32ad7f1015300"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002594a00), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0025949e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:49:41.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-2brfh" for this suite. May 12 08:50:03.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:50:03.829: INFO: namespace: e2e-tests-init-container-2brfh, resource: bindings, ignored listing per whitelist May 12 08:50:03.881: INFO: namespace e2e-tests-init-container-2brfh deletion completed in 22.196313997s • [SLOW TEST:80.575 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:50:03.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 12 08:50:04.112: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92848ce4-942d-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-7nlqt" to be "success or failure" May 12 08:50:04.115: INFO: Pod "downwardapi-volume-92848ce4-942d-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.173797ms May 12 08:50:06.205: INFO: Pod "downwardapi-volume-92848ce4-942d-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093598837s May 12 08:50:08.209: INFO: Pod "downwardapi-volume-92848ce4-942d-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.097085108s May 12 08:50:10.212: INFO: Pod "downwardapi-volume-92848ce4-942d-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.099981917s STEP: Saw pod success May 12 08:50:10.212: INFO: Pod "downwardapi-volume-92848ce4-942d-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:50:10.214: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-92848ce4-942d-11ea-bb6f-0242ac11001c container client-container: STEP: delete the pod May 12 08:50:10.259: INFO: Waiting for pod downwardapi-volume-92848ce4-942d-11ea-bb6f-0242ac11001c to disappear May 12 08:50:10.264: INFO: Pod downwardapi-volume-92848ce4-942d-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:50:10.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7nlqt" for this suite. May 12 08:50:16.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:50:16.327: INFO: namespace: e2e-tests-projected-7nlqt, resource: bindings, ignored listing per whitelist May 12 08:50:16.351: INFO: namespace e2e-tests-projected-7nlqt deletion completed in 6.083714684s • [SLOW TEST:12.469 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:50:16.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 12 08:50:16.822: INFO: Waiting up to 5m0s for pod "pod-9a1719c8-942d-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-emptydir-5fmh4" to be "success or failure" May 12 08:50:16.825: INFO: Pod "pod-9a1719c8-942d-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.97007ms May 12 08:50:18.829: INFO: Pod "pod-9a1719c8-942d-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007015156s May 12 08:50:20.870: INFO: Pod "pod-9a1719c8-942d-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048305864s May 12 08:50:22.874: INFO: Pod "pod-9a1719c8-942d-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.051515577s STEP: Saw pod success May 12 08:50:22.874: INFO: Pod "pod-9a1719c8-942d-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:50:22.876: INFO: Trying to get logs from node hunter-worker2 pod pod-9a1719c8-942d-11ea-bb6f-0242ac11001c container test-container: STEP: delete the pod May 12 08:50:22.943: INFO: Waiting for pod pod-9a1719c8-942d-11ea-bb6f-0242ac11001c to disappear May 12 08:50:22.950: INFO: Pod pod-9a1719c8-942d-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:50:22.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-5fmh4" for this suite. May 12 08:50:28.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:50:29.021: INFO: namespace: e2e-tests-emptydir-5fmh4, resource: bindings, ignored listing per whitelist May 12 08:50:29.053: INFO: namespace e2e-tests-emptydir-5fmh4 deletion completed in 6.098780876s • [SLOW TEST:12.703 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:50:29.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 12 08:50:29.474: INFO: Waiting up to 5m0s for pod "downward-api-a1a2b349-942d-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-downward-api-2wbr4" to be "success or failure" May 12 08:50:29.572: INFO: Pod "downward-api-a1a2b349-942d-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 97.181028ms May 12 08:50:31.766: INFO: Pod "downward-api-a1a2b349-942d-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292042463s May 12 08:50:33.870: INFO: Pod "downward-api-a1a2b349-942d-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.396011358s May 12 08:50:36.273: INFO: Pod "downward-api-a1a2b349-942d-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.798184583s May 12 08:50:38.553: INFO: Pod "downward-api-a1a2b349-942d-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.079161202s STEP: Saw pod success May 12 08:50:38.554: INFO: Pod "downward-api-a1a2b349-942d-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:50:38.556: INFO: Trying to get logs from node hunter-worker pod downward-api-a1a2b349-942d-11ea-bb6f-0242ac11001c container dapi-container: STEP: delete the pod May 12 08:50:39.042: INFO: Waiting for pod downward-api-a1a2b349-942d-11ea-bb6f-0242ac11001c to disappear May 12 08:50:39.085: INFO: Pod downward-api-a1a2b349-942d-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:50:39.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2wbr4" for this suite. May 12 08:50:45.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:50:45.698: INFO: namespace: e2e-tests-downward-api-2wbr4, resource: bindings, ignored listing per whitelist May 12 08:50:45.704: INFO: namespace e2e-tests-downward-api-2wbr4 deletion completed in 6.616086706s • [SLOW TEST:16.651 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:50:45.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 12 08:50:53.110: INFO: Successfully updated pod "pod-update-aba3c1fe-942d-11ea-bb6f-0242ac11001c" STEP: verifying the updated pod is in kubernetes May 12 08:50:53.116: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:50:53.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-mtxzs" for this suite. 
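The Downward API spec earlier in this block injects the node's IP into the container through an environment variable. A minimal sketch of that kind of env entry is shown below; the variable name is illustrative, only the status.hostIP field path is the mechanism the spec exercises.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := corev1.EnvVar{
		Name: "HOST_IP", // illustrative name
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{
				FieldPath: "status.hostIP", // downward API field resolved by the kubelet
			},
		},
	}
	out, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(out))
}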
May 12 08:51:15.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:51:15.165: INFO: namespace: e2e-tests-pods-mtxzs, resource: bindings, ignored listing per whitelist May 12 08:51:15.194: INFO: namespace e2e-tests-pods-mtxzs deletion completed in 22.074516917s • [SLOW TEST:29.489 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:51:15.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 12 08:51:15.739: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 08:51:15.833: INFO: Waiting for terminating namespaces to be deleted... May 12 08:51:15.836: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 12 08:51:15.851: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 12 08:51:15.851: INFO: Container coredns ready: true, restart count 0 May 12 08:51:15.851: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 12 08:51:15.851: INFO: Container kube-proxy ready: true, restart count 0 May 12 08:51:15.851: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 12 08:51:15.851: INFO: Container kindnet-cni ready: true, restart count 0 May 12 08:51:15.851: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 12 08:51:15.855: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 12 08:51:15.855: INFO: Container kube-proxy ready: true, restart count 0 May 12 08:51:15.855: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 12 08:51:15.855: INFO: Container kindnet-cni ready: true, restart count 0 May 12 08:51:15.855: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 12 08:51:15.855: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160e3bce541a9872], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
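The FailedScheduling event above ("0/3 nodes are available: 3 node(s) didn't match node selector") comes from a pod whose nodeSelector matches no node label. A sketch of such a pod spec fragment follows; the label key/value are placeholders, and the pause image is the one already used elsewhere in this run.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		// No node carries this label, so the scheduler reports
		// "3 node(s) didn't match node selector" and the pod stays Pending.
		NodeSelector: map[string]string{
			"example.com/nonexistent": "true",
		},
		Containers: []corev1.Container{
			{Name: "restricted", Image: "k8s.gcr.io/pause:3.1"},
		},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}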
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:51:16.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-zlbzb" for this suite. May 12 08:51:25.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:51:25.116: INFO: namespace: e2e-tests-sched-pred-zlbzb, resource: bindings, ignored listing per whitelist May 12 08:51:25.128: INFO: namespace e2e-tests-sched-pred-zlbzb deletion completed in 8.250712553s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:9.934 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:51:25.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 12 08:51:25.976: INFO: Waiting up to 5m0s for pod "pod-c3325a1f-942d-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-emptydir-zxn4p" to be "success or failure" May 12 08:51:26.291: INFO: Pod "pod-c3325a1f-942d-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 315.046245ms May 12 08:51:28.294: INFO: Pod "pod-c3325a1f-942d-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31799008s May 12 08:51:30.298: INFO: Pod "pod-c3325a1f-942d-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322040679s May 12 08:51:32.530: INFO: Pod "pod-c3325a1f-942d-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.553353182s STEP: Saw pod success May 12 08:51:32.530: INFO: Pod "pod-c3325a1f-942d-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:51:32.533: INFO: Trying to get logs from node hunter-worker2 pod pod-c3325a1f-942d-11ea-bb6f-0242ac11001c container test-container: STEP: delete the pod May 12 08:51:32.655: INFO: Waiting for pod pod-c3325a1f-942d-11ea-bb6f-0242ac11001c to disappear May 12 08:51:32.671: INFO: Pod pod-c3325a1f-942d-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:51:32.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-zxn4p" for this suite. 
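The EmptyDir specs in this run mount a tmpfs-backed volume and then check ownership and permission bits inside it. A sketch of the volume definition involved is below; volume name and mount path are illustrative, the Medium: "Memory" setting is what makes the emptyDir tmpfs-backed.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "test-volume", // illustrative name
		VolumeSource: corev1.VolumeSource{
			// StorageMediumMemory backs the emptyDir with tmpfs.
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
	mount := corev1.VolumeMount{Name: "test-volume", MountPath: "/test-volume"}
	out, _ := json.MarshalIndent(map[string]interface{}{"volume": vol, "mount": mount}, "", "  ")
	fmt.Println(string(out))
}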
May 12 08:51:38.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:51:38.819: INFO: namespace: e2e-tests-emptydir-zxn4p, resource: bindings, ignored listing per whitelist May 12 08:51:38.848: INFO: namespace e2e-tests-emptydir-zxn4p deletion completed in 6.173623687s • [SLOW TEST:13.720 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:51:38.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-cb41bec4-942d-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume secrets May 12 08:51:39.314: INFO: Waiting up to 5m0s for pod "pod-secrets-cb4403a5-942d-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-secrets-jqg2r" to be "success or failure" May 12 08:51:39.422: INFO: Pod "pod-secrets-cb4403a5-942d-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 107.779812ms May 12 08:51:41.426: INFO: Pod "pod-secrets-cb4403a5-942d-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111703163s May 12 08:51:43.430: INFO: Pod "pod-secrets-cb4403a5-942d-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115710813s May 12 08:51:45.434: INFO: Pod "pod-secrets-cb4403a5-942d-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.119794356s May 12 08:51:47.436: INFO: Pod "pod-secrets-cb4403a5-942d-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.122428188s STEP: Saw pod success May 12 08:51:47.436: INFO: Pod "pod-secrets-cb4403a5-942d-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:51:47.439: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-cb4403a5-942d-11ea-bb6f-0242ac11001c container secret-env-test: STEP: delete the pod May 12 08:51:47.513: INFO: Waiting for pod pod-secrets-cb4403a5-942d-11ea-bb6f-0242ac11001c to disappear May 12 08:51:47.530: INFO: Pod pod-secrets-cb4403a5-942d-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:51:47.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-jqg2r" for this suite. 
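The Secrets spec above exposes a secret key to the container as an environment variable. A sketch of the env entry shape follows; the secret name and key are placeholders, not the generated names from this run.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := corev1.EnvVar{
		Name: "SECRET_DATA", // illustrative variable name
		ValueFrom: &corev1.EnvVarSource{
			SecretKeyRef: &corev1.SecretKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"}, // placeholder secret
				Key:                  "data-1",                                         // placeholder key
			},
		},
	}
	out, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(out))
}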
May 12 08:51:53.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:51:53.592: INFO: namespace: e2e-tests-secrets-jqg2r, resource: bindings, ignored listing per whitelist May 12 08:51:53.786: INFO: namespace e2e-tests-secrets-jqg2r deletion completed in 6.253265899s • [SLOW TEST:14.938 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:51:53.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args May 12 08:51:54.376: INFO: Waiting up to 5m0s for pod "var-expansion-d43d60af-942d-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-var-expansion-xd42f" to be "success or failure" May 12 08:51:54.392: INFO: Pod "var-expansion-d43d60af-942d-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.824512ms May 12 08:51:56.555: INFO: Pod "var-expansion-d43d60af-942d-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178652386s May 12 08:51:58.572: INFO: Pod "var-expansion-d43d60af-942d-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195911198s May 12 08:52:00.686: INFO: Pod "var-expansion-d43d60af-942d-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.310098728s May 12 08:52:02.690: INFO: Pod "var-expansion-d43d60af-942d-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.313872306s STEP: Saw pod success May 12 08:52:02.690: INFO: Pod "var-expansion-d43d60af-942d-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:52:02.692: INFO: Trying to get logs from node hunter-worker pod var-expansion-d43d60af-942d-11ea-bb6f-0242ac11001c container dapi-container: STEP: delete the pod May 12 08:52:02.808: INFO: Waiting for pod var-expansion-d43d60af-942d-11ea-bb6f-0242ac11001c to disappear May 12 08:52:02.834: INFO: Pod var-expansion-d43d60af-942d-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:52:02.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-xd42f" for this suite. 
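The Variable Expansion spec above substitutes $(VAR) references in a container's args with values from its env list. A sketch of the mechanism is shown below; the env name and echoed value are illustrative, while the container name and busybox image match the ones seen in this log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "docker.io/library/busybox:1.29",
		Command: []string{"sh", "-c"},
		// $(TEST_VAR) is expanded by the kubelet from the env list below
		// before the container starts.
		Args: []string{"echo test-value: $(TEST_VAR)"},
		Env: []corev1.EnvVar{
			{Name: "TEST_VAR", Value: "test-value"},
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}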
May 12 08:52:08.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:52:08.922: INFO: namespace: e2e-tests-var-expansion-xd42f, resource: bindings, ignored listing per whitelist May 12 08:52:08.932: INFO: namespace e2e-tests-var-expansion-xd42f deletion completed in 6.094552396s • [SLOW TEST:15.146 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:52:08.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-rdwsc May 12 08:52:16.382: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-rdwsc STEP: checking the pod's current state and verifying that restartCount is present May 12 08:52:16.704: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:56:16.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-rdwsc" for this suite. 
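The probe spec above watches liveness-http for roughly four minutes and passes because restartCount stays at 0, i.e. the /healthz HTTP liveness probe keeps succeeding. A sketch of an HTTP liveness probe of that shape is below; the port, path, and timings are assumptions for illustration, not values read from the pod in this run.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	var probe corev1.Probe
	// Handler fields are set directly so the sketch compiles whether the
	// client version embeds Handler or ProbeHandler in Probe.
	probe.HTTPGet = &corev1.HTTPGetAction{
		Path: "/healthz",
		Port: intstr.FromInt(8080), // assumed port
	}
	probe.InitialDelaySeconds = 15
	probe.PeriodSeconds = 10
	probe.FailureThreshold = 3
	out, _ := json.MarshalIndent(probe, "", "  ")
	// As long as /healthz keeps returning 2xx/3xx, the kubelet never restarts
	// the container, which is exactly what this conformance spec asserts.
	fmt.Println(string(out))
}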
May 12 08:56:23.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:56:23.738: INFO: namespace: e2e-tests-container-probe-rdwsc, resource: bindings, ignored listing per whitelist May 12 08:56:23.754: INFO: namespace e2e-tests-container-probe-rdwsc deletion completed in 6.678971115s • [SLOW TEST:254.821 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:56:23.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-mfrd STEP: Creating a pod to test atomic-volume-subpath May 12 08:56:24.975: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mfrd" in namespace "e2e-tests-subpath-x6f9f" to be "success or failure" May 12 08:56:24.978: INFO: Pod "pod-subpath-test-configmap-mfrd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.572718ms May 12 08:56:27.019: INFO: Pod "pod-subpath-test-configmap-mfrd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043761622s May 12 08:56:29.193: INFO: Pod "pod-subpath-test-configmap-mfrd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.217252804s May 12 08:56:31.348: INFO: Pod "pod-subpath-test-configmap-mfrd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.373018908s May 12 08:56:33.353: INFO: Pod "pod-subpath-test-configmap-mfrd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.377037104s May 12 08:56:35.357: INFO: Pod "pod-subpath-test-configmap-mfrd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.381035464s May 12 08:56:37.420: INFO: Pod "pod-subpath-test-configmap-mfrd": Phase="Running", Reason="", readiness=true. Elapsed: 12.444326564s May 12 08:56:39.810: INFO: Pod "pod-subpath-test-configmap-mfrd": Phase="Running", Reason="", readiness=false. Elapsed: 14.834247428s May 12 08:56:41.955: INFO: Pod "pod-subpath-test-configmap-mfrd": Phase="Running", Reason="", readiness=false. Elapsed: 16.979365717s May 12 08:56:43.959: INFO: Pod "pod-subpath-test-configmap-mfrd": Phase="Running", Reason="", readiness=false. Elapsed: 18.983075802s May 12 08:56:45.962: INFO: Pod "pod-subpath-test-configmap-mfrd": Phase="Running", Reason="", readiness=false. Elapsed: 20.986507769s May 12 08:56:47.965: INFO: Pod "pod-subpath-test-configmap-mfrd": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.989547121s May 12 08:56:49.969: INFO: Pod "pod-subpath-test-configmap-mfrd": Phase="Running", Reason="", readiness=false. Elapsed: 24.993902169s May 12 08:56:51.973: INFO: Pod "pod-subpath-test-configmap-mfrd": Phase="Running", Reason="", readiness=false. Elapsed: 26.997747227s May 12 08:56:53.977: INFO: Pod "pod-subpath-test-configmap-mfrd": Phase="Running", Reason="", readiness=false. Elapsed: 29.001510134s May 12 08:56:56.225: INFO: Pod "pod-subpath-test-configmap-mfrd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.249123885s STEP: Saw pod success May 12 08:56:56.225: INFO: Pod "pod-subpath-test-configmap-mfrd" satisfied condition "success or failure" May 12 08:56:56.228: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-mfrd container test-container-subpath-configmap-mfrd: STEP: delete the pod May 12 08:56:56.311: INFO: Waiting for pod pod-subpath-test-configmap-mfrd to disappear May 12 08:56:56.342: INFO: Pod pod-subpath-test-configmap-mfrd no longer exists STEP: Deleting pod pod-subpath-test-configmap-mfrd May 12 08:56:56.342: INFO: Deleting pod "pod-subpath-test-configmap-mfrd" in namespace "e2e-tests-subpath-x6f9f" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:56:56.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-x6f9f" for this suite. May 12 08:57:04.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:57:04.586: INFO: namespace: e2e-tests-subpath-x6f9f, resource: bindings, ignored listing per whitelist May 12 08:57:04.640: INFO: namespace e2e-tests-subpath-x6f9f deletion completed in 8.294432846s • [SLOW TEST:40.887 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:57:04.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-z6tfr [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: 
Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-z6tfr STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-z6tfr May 12 08:57:04.829: INFO: Found 0 stateful pods, waiting for 1 May 12 08:57:14.834: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 12 08:57:14.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-z6tfr ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 08:57:15.181: INFO: stderr: "I0512 08:57:14.956110 2843 log.go:172] (0xc000138840) (0xc00067d400) Create stream\nI0512 08:57:14.956178 2843 log.go:172] (0xc000138840) (0xc00067d400) Stream added, broadcasting: 1\nI0512 08:57:14.958184 2843 log.go:172] (0xc000138840) Reply frame received for 1\nI0512 08:57:14.958211 2843 log.go:172] (0xc000138840) (0xc00067d4a0) Create stream\nI0512 08:57:14.958217 2843 log.go:172] (0xc000138840) (0xc00067d4a0) Stream added, broadcasting: 3\nI0512 08:57:14.958994 2843 log.go:172] (0xc000138840) Reply frame received for 3\nI0512 08:57:14.959020 2843 log.go:172] (0xc000138840) (0xc000762000) Create stream\nI0512 08:57:14.959029 2843 log.go:172] (0xc000138840) (0xc000762000) Stream added, broadcasting: 5\nI0512 08:57:14.959763 2843 log.go:172] (0xc000138840) Reply frame received for 5\nI0512 08:57:15.175280 2843 log.go:172] (0xc000138840) Data frame received for 3\nI0512 08:57:15.175309 2843 log.go:172] (0xc00067d4a0) (3) Data frame handling\nI0512 08:57:15.175324 2843 log.go:172] (0xc00067d4a0) (3) Data frame sent\nI0512 08:57:15.175335 2843 log.go:172] (0xc000138840) Data frame received for 3\nI0512 08:57:15.175342 2843 log.go:172] (0xc00067d4a0) (3) Data frame handling\nI0512 08:57:15.175962 2843 log.go:172] (0xc000138840) Data frame received for 5\nI0512 08:57:15.175976 2843 log.go:172] (0xc000762000) (5) Data frame handling\nI0512 08:57:15.177244 2843 log.go:172] (0xc000138840) Data frame received for 1\nI0512 08:57:15.177271 2843 log.go:172] (0xc00067d400) (1) Data frame handling\nI0512 08:57:15.177282 2843 log.go:172] (0xc00067d400) (1) Data frame sent\nI0512 08:57:15.177294 2843 log.go:172] (0xc000138840) (0xc00067d400) Stream removed, broadcasting: 1\nI0512 08:57:15.177318 2843 log.go:172] (0xc000138840) Go away received\nI0512 08:57:15.177522 2843 log.go:172] (0xc000138840) (0xc00067d400) Stream removed, broadcasting: 1\nI0512 08:57:15.177536 2843 log.go:172] (0xc000138840) (0xc00067d4a0) Stream removed, broadcasting: 3\nI0512 08:57:15.177550 2843 log.go:172] (0xc000138840) (0xc000762000) Stream removed, broadcasting: 5\n" May 12 08:57:15.181: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 08:57:15.181: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 08:57:15.184: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 12 08:57:25.655: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 08:57:25.655: INFO: Waiting for statefulset status.replicas updated to 0 May 12 08:57:26.541: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999236s May 12 08:57:27.545: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.820439439s May 
12 08:57:28.851: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.816226294s May 12 08:57:29.856: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.510179049s May 12 08:57:30.860: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.505531735s May 12 08:57:31.863: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.502027127s May 12 08:57:32.868: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.49825043s May 12 08:57:33.873: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.493245637s May 12 08:57:34.936: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.488553768s May 12 08:57:36.020: INFO: Verifying statefulset ss doesn't scale past 1 for another 425.727728ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-z6tfr May 12 08:57:37.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-z6tfr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 08:57:37.222: INFO: stderr: "I0512 08:57:37.157971 2866 log.go:172] (0xc00013a6e0) (0xc000677220) Create stream\nI0512 08:57:37.158051 2866 log.go:172] (0xc00013a6e0) (0xc000677220) Stream added, broadcasting: 1\nI0512 08:57:37.160131 2866 log.go:172] (0xc00013a6e0) Reply frame received for 1\nI0512 08:57:37.160164 2866 log.go:172] (0xc00013a6e0) (0xc000732000) Create stream\nI0512 08:57:37.160175 2866 log.go:172] (0xc00013a6e0) (0xc000732000) Stream added, broadcasting: 3\nI0512 08:57:37.161248 2866 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0512 08:57:37.161300 2866 log.go:172] (0xc00013a6e0) (0xc000410000) Create stream\nI0512 08:57:37.161309 2866 log.go:172] (0xc00013a6e0) (0xc000410000) Stream added, broadcasting: 5\nI0512 08:57:37.162559 2866 log.go:172] (0xc00013a6e0) Reply frame received for 5\nI0512 08:57:37.216554 2866 log.go:172] (0xc00013a6e0) Data frame received for 5\nI0512 08:57:37.216579 2866 log.go:172] (0xc000410000) (5) Data frame handling\nI0512 08:57:37.216614 2866 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0512 08:57:37.216643 2866 log.go:172] (0xc000732000) (3) Data frame handling\nI0512 08:57:37.216660 2866 log.go:172] (0xc000732000) (3) Data frame sent\nI0512 08:57:37.216673 2866 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0512 08:57:37.216700 2866 log.go:172] (0xc000732000) (3) Data frame handling\nI0512 08:57:37.218403 2866 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0512 08:57:37.218424 2866 log.go:172] (0xc000677220) (1) Data frame handling\nI0512 08:57:37.218446 2866 log.go:172] (0xc000677220) (1) Data frame sent\nI0512 08:57:37.218460 2866 log.go:172] (0xc00013a6e0) (0xc000677220) Stream removed, broadcasting: 1\nI0512 08:57:37.218518 2866 log.go:172] (0xc00013a6e0) Go away received\nI0512 08:57:37.218731 2866 log.go:172] (0xc00013a6e0) (0xc000677220) Stream removed, broadcasting: 1\nI0512 08:57:37.218759 2866 log.go:172] (0xc00013a6e0) (0xc000732000) Stream removed, broadcasting: 3\nI0512 08:57:37.218788 2866 log.go:172] (0xc00013a6e0) (0xc000410000) Stream removed, broadcasting: 5\n" May 12 08:57:37.222: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 08:57:37.222: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 08:57:37.225: INFO: Found 1 stateful pods, waiting for 3 May 12 
08:57:47.250: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 12 08:57:47.250: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 12 08:57:47.250: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false May 12 08:57:57.229: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 12 08:57:57.229: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 12 08:57:57.229: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 12 08:57:57.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-z6tfr ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 08:57:57.487: INFO: stderr: "I0512 08:57:57.358659 2889 log.go:172] (0xc0007de370) (0xc000714640) Create stream\nI0512 08:57:57.358715 2889 log.go:172] (0xc0007de370) (0xc000714640) Stream added, broadcasting: 1\nI0512 08:57:57.360698 2889 log.go:172] (0xc0007de370) Reply frame received for 1\nI0512 08:57:57.360745 2889 log.go:172] (0xc0007de370) (0xc0005dad20) Create stream\nI0512 08:57:57.360768 2889 log.go:172] (0xc0007de370) (0xc0005dad20) Stream added, broadcasting: 3\nI0512 08:57:57.362095 2889 log.go:172] (0xc0007de370) Reply frame received for 3\nI0512 08:57:57.362134 2889 log.go:172] (0xc0007de370) (0xc0005dae60) Create stream\nI0512 08:57:57.362148 2889 log.go:172] (0xc0007de370) (0xc0005dae60) Stream added, broadcasting: 5\nI0512 08:57:57.362951 2889 log.go:172] (0xc0007de370) Reply frame received for 5\nI0512 08:57:57.483232 2889 log.go:172] (0xc0007de370) Data frame received for 5\nI0512 08:57:57.483251 2889 log.go:172] (0xc0005dae60) (5) Data frame handling\nI0512 08:57:57.483282 2889 log.go:172] (0xc0007de370) Data frame received for 3\nI0512 08:57:57.483308 2889 log.go:172] (0xc0005dad20) (3) Data frame handling\nI0512 08:57:57.483323 2889 log.go:172] (0xc0005dad20) (3) Data frame sent\nI0512 08:57:57.483333 2889 log.go:172] (0xc0007de370) Data frame received for 3\nI0512 08:57:57.483341 2889 log.go:172] (0xc0005dad20) (3) Data frame handling\nI0512 08:57:57.484580 2889 log.go:172] (0xc0007de370) Data frame received for 1\nI0512 08:57:57.484603 2889 log.go:172] (0xc000714640) (1) Data frame handling\nI0512 08:57:57.484617 2889 log.go:172] (0xc000714640) (1) Data frame sent\nI0512 08:57:57.484642 2889 log.go:172] (0xc0007de370) (0xc000714640) Stream removed, broadcasting: 1\nI0512 08:57:57.484663 2889 log.go:172] (0xc0007de370) Go away received\nI0512 08:57:57.484795 2889 log.go:172] (0xc0007de370) (0xc000714640) Stream removed, broadcasting: 1\nI0512 08:57:57.484808 2889 log.go:172] (0xc0007de370) (0xc0005dad20) Stream removed, broadcasting: 3\nI0512 08:57:57.484815 2889 log.go:172] (0xc0007de370) (0xc0005dae60) Stream removed, broadcasting: 5\n" May 12 08:57:57.487: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 08:57:57.487: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 08:57:57.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-z6tfr ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 
08:57:57.919: INFO: stderr: "I0512 08:57:57.603782 2912 log.go:172] (0xc00013a630) (0xc0001f9360) Create stream\nI0512 08:57:57.603844 2912 log.go:172] (0xc00013a630) (0xc0001f9360) Stream added, broadcasting: 1\nI0512 08:57:57.606932 2912 log.go:172] (0xc00013a630) Reply frame received for 1\nI0512 08:57:57.606987 2912 log.go:172] (0xc00013a630) (0xc0005c2000) Create stream\nI0512 08:57:57.607007 2912 log.go:172] (0xc00013a630) (0xc0005c2000) Stream added, broadcasting: 3\nI0512 08:57:57.608030 2912 log.go:172] (0xc00013a630) Reply frame received for 3\nI0512 08:57:57.608057 2912 log.go:172] (0xc00013a630) (0xc000118000) Create stream\nI0512 08:57:57.608067 2912 log.go:172] (0xc00013a630) (0xc000118000) Stream added, broadcasting: 5\nI0512 08:57:57.609008 2912 log.go:172] (0xc00013a630) Reply frame received for 5\nI0512 08:57:57.911355 2912 log.go:172] (0xc00013a630) Data frame received for 3\nI0512 08:57:57.911400 2912 log.go:172] (0xc0005c2000) (3) Data frame handling\nI0512 08:57:57.911418 2912 log.go:172] (0xc0005c2000) (3) Data frame sent\nI0512 08:57:57.911828 2912 log.go:172] (0xc00013a630) Data frame received for 3\nI0512 08:57:57.911847 2912 log.go:172] (0xc0005c2000) (3) Data frame handling\nI0512 08:57:57.911880 2912 log.go:172] (0xc00013a630) Data frame received for 5\nI0512 08:57:57.911890 2912 log.go:172] (0xc000118000) (5) Data frame handling\nI0512 08:57:57.915498 2912 log.go:172] (0xc00013a630) Data frame received for 1\nI0512 08:57:57.915539 2912 log.go:172] (0xc0001f9360) (1) Data frame handling\nI0512 08:57:57.915572 2912 log.go:172] (0xc0001f9360) (1) Data frame sent\nI0512 08:57:57.915609 2912 log.go:172] (0xc00013a630) (0xc0001f9360) Stream removed, broadcasting: 1\nI0512 08:57:57.915651 2912 log.go:172] (0xc00013a630) Go away received\nI0512 08:57:57.915864 2912 log.go:172] (0xc00013a630) (0xc0001f9360) Stream removed, broadcasting: 1\nI0512 08:57:57.915883 2912 log.go:172] (0xc00013a630) (0xc0005c2000) Stream removed, broadcasting: 3\nI0512 08:57:57.915894 2912 log.go:172] (0xc00013a630) (0xc000118000) Stream removed, broadcasting: 5\n" May 12 08:57:57.919: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 08:57:57.919: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 08:57:57.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-z6tfr ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 12 08:57:58.890: INFO: stderr: "I0512 08:57:58.490896 2935 log.go:172] (0xc00015c840) (0xc0006154a0) Create stream\nI0512 08:57:58.490953 2935 log.go:172] (0xc00015c840) (0xc0006154a0) Stream added, broadcasting: 1\nI0512 08:57:58.492914 2935 log.go:172] (0xc00015c840) Reply frame received for 1\nI0512 08:57:58.492945 2935 log.go:172] (0xc00015c840) (0xc000418000) Create stream\nI0512 08:57:58.492957 2935 log.go:172] (0xc00015c840) (0xc000418000) Stream added, broadcasting: 3\nI0512 08:57:58.494007 2935 log.go:172] (0xc00015c840) Reply frame received for 3\nI0512 08:57:58.494036 2935 log.go:172] (0xc00015c840) (0xc0004180a0) Create stream\nI0512 08:57:58.494043 2935 log.go:172] (0xc00015c840) (0xc0004180a0) Stream added, broadcasting: 5\nI0512 08:57:58.495006 2935 log.go:172] (0xc00015c840) Reply frame received for 5\nI0512 08:57:58.883153 2935 log.go:172] (0xc00015c840) Data frame received for 3\nI0512 08:57:58.883195 2935 log.go:172] (0xc000418000) (3) Data frame 
handling\nI0512 08:57:58.883217 2935 log.go:172] (0xc000418000) (3) Data frame sent\nI0512 08:57:58.883680 2935 log.go:172] (0xc00015c840) Data frame received for 5\nI0512 08:57:58.883709 2935 log.go:172] (0xc0004180a0) (5) Data frame handling\nI0512 08:57:58.883811 2935 log.go:172] (0xc00015c840) Data frame received for 3\nI0512 08:57:58.883966 2935 log.go:172] (0xc000418000) (3) Data frame handling\nI0512 08:57:58.886007 2935 log.go:172] (0xc00015c840) Data frame received for 1\nI0512 08:57:58.886046 2935 log.go:172] (0xc0006154a0) (1) Data frame handling\nI0512 08:57:58.886084 2935 log.go:172] (0xc0006154a0) (1) Data frame sent\nI0512 08:57:58.886106 2935 log.go:172] (0xc00015c840) (0xc0006154a0) Stream removed, broadcasting: 1\nI0512 08:57:58.886285 2935 log.go:172] (0xc00015c840) Go away received\nI0512 08:57:58.886366 2935 log.go:172] (0xc00015c840) (0xc0006154a0) Stream removed, broadcasting: 1\nI0512 08:57:58.886404 2935 log.go:172] (0xc00015c840) (0xc000418000) Stream removed, broadcasting: 3\nI0512 08:57:58.886420 2935 log.go:172] (0xc00015c840) (0xc0004180a0) Stream removed, broadcasting: 5\n" May 12 08:57:58.890: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 12 08:57:58.890: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 12 08:57:58.890: INFO: Waiting for statefulset status.replicas updated to 0 May 12 08:57:58.967: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 12 08:58:08.976: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 08:58:08.976: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 12 08:58:08.976: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 12 08:58:09.149: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999613s May 12 08:58:10.348: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.832775668s May 12 08:58:11.352: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.633991521s May 12 08:58:12.380: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.63007505s May 12 08:58:13.386: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.601769018s May 12 08:58:14.440: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.596194876s May 12 08:58:15.445: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.54126779s May 12 08:58:16.449: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.536763641s May 12 08:58:17.454: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.532507795s May 12 08:58:18.590: INFO: Verifying statefulset ss doesn't scale past 3 for another 527.753034ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-z6tfr May 12 08:58:19.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-z6tfr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 08:58:20.211: INFO: stderr: "I0512 08:58:20.137680 2958 log.go:172] (0xc000138630) (0xc0004d7360) Create stream\nI0512 08:58:20.137753 2958 log.go:172] (0xc000138630) (0xc0004d7360) Stream added, broadcasting: 1\nI0512 08:58:20.140355 2958 log.go:172] (0xc000138630) Reply frame received for 1\nI0512 08:58:20.140406 2958 log.go:172] 
(0xc000138630) (0xc0004d7400) Create stream\nI0512 08:58:20.140418 2958 log.go:172] (0xc000138630) (0xc0004d7400) Stream added, broadcasting: 3\nI0512 08:58:20.141511 2958 log.go:172] (0xc000138630) Reply frame received for 3\nI0512 08:58:20.141559 2958 log.go:172] (0xc000138630) (0xc0002ee000) Create stream\nI0512 08:58:20.141576 2958 log.go:172] (0xc000138630) (0xc0002ee000) Stream added, broadcasting: 5\nI0512 08:58:20.142444 2958 log.go:172] (0xc000138630) Reply frame received for 5\nI0512 08:58:20.204575 2958 log.go:172] (0xc000138630) Data frame received for 5\nI0512 08:58:20.204608 2958 log.go:172] (0xc0002ee000) (5) Data frame handling\nI0512 08:58:20.204636 2958 log.go:172] (0xc000138630) Data frame received for 3\nI0512 08:58:20.204674 2958 log.go:172] (0xc0004d7400) (3) Data frame handling\nI0512 08:58:20.204694 2958 log.go:172] (0xc0004d7400) (3) Data frame sent\nI0512 08:58:20.204710 2958 log.go:172] (0xc000138630) Data frame received for 3\nI0512 08:58:20.204725 2958 log.go:172] (0xc0004d7400) (3) Data frame handling\nI0512 08:58:20.206412 2958 log.go:172] (0xc000138630) Data frame received for 1\nI0512 08:58:20.206431 2958 log.go:172] (0xc0004d7360) (1) Data frame handling\nI0512 08:58:20.206445 2958 log.go:172] (0xc0004d7360) (1) Data frame sent\nI0512 08:58:20.206463 2958 log.go:172] (0xc000138630) (0xc0004d7360) Stream removed, broadcasting: 1\nI0512 08:58:20.206480 2958 log.go:172] (0xc000138630) Go away received\nI0512 08:58:20.206714 2958 log.go:172] (0xc000138630) (0xc0004d7360) Stream removed, broadcasting: 1\nI0512 08:58:20.206732 2958 log.go:172] (0xc000138630) (0xc0004d7400) Stream removed, broadcasting: 3\nI0512 08:58:20.206741 2958 log.go:172] (0xc000138630) (0xc0002ee000) Stream removed, broadcasting: 5\n" May 12 08:58:20.211: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 08:58:20.211: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 08:58:20.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-z6tfr ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 08:58:20.458: INFO: stderr: "I0512 08:58:20.334522 2981 log.go:172] (0xc00011cd10) (0xc0006abae0) Create stream\nI0512 08:58:20.334633 2981 log.go:172] (0xc00011cd10) (0xc0006abae0) Stream added, broadcasting: 1\nI0512 08:58:20.340561 2981 log.go:172] (0xc00011cd10) Reply frame received for 1\nI0512 08:58:20.340704 2981 log.go:172] (0xc00011cd10) (0xc00082cb40) Create stream\nI0512 08:58:20.340775 2981 log.go:172] (0xc00011cd10) (0xc00082cb40) Stream added, broadcasting: 3\nI0512 08:58:20.343593 2981 log.go:172] (0xc00011cd10) Reply frame received for 3\nI0512 08:58:20.343657 2981 log.go:172] (0xc00011cd10) (0xc0006aae60) Create stream\nI0512 08:58:20.343673 2981 log.go:172] (0xc00011cd10) (0xc0006aae60) Stream added, broadcasting: 5\nI0512 08:58:20.344652 2981 log.go:172] (0xc00011cd10) Reply frame received for 5\nI0512 08:58:20.446100 2981 log.go:172] (0xc00011cd10) Data frame received for 3\nI0512 08:58:20.446152 2981 log.go:172] (0xc00082cb40) (3) Data frame handling\nI0512 08:58:20.446176 2981 log.go:172] (0xc00082cb40) (3) Data frame sent\nI0512 08:58:20.446193 2981 log.go:172] (0xc00011cd10) Data frame received for 3\nI0512 08:58:20.446206 2981 log.go:172] (0xc00082cb40) (3) Data frame handling\nI0512 08:58:20.446595 2981 log.go:172] (0xc00011cd10) Data frame received for 5\nI0512 
08:58:20.446619 2981 log.go:172] (0xc0006aae60) (5) Data frame handling\nI0512 08:58:20.448359 2981 log.go:172] (0xc00011cd10) Data frame received for 1\nI0512 08:58:20.448375 2981 log.go:172] (0xc0006abae0) (1) Data frame handling\nI0512 08:58:20.448385 2981 log.go:172] (0xc0006abae0) (1) Data frame sent\nI0512 08:58:20.448396 2981 log.go:172] (0xc00011cd10) (0xc0006abae0) Stream removed, broadcasting: 1\nI0512 08:58:20.448568 2981 log.go:172] (0xc00011cd10) (0xc0006abae0) Stream removed, broadcasting: 1\nI0512 08:58:20.448586 2981 log.go:172] (0xc00011cd10) (0xc00082cb40) Stream removed, broadcasting: 3\nI0512 08:58:20.448757 2981 log.go:172] (0xc00011cd10) (0xc0006aae60) Stream removed, broadcasting: 5\n" May 12 08:58:20.458: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 08:58:20.458: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 08:58:20.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-z6tfr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 12 08:58:20.764: INFO: stderr: "I0512 08:58:20.685955 3004 log.go:172] (0xc000162840) (0xc00074c640) Create stream\nI0512 08:58:20.686010 3004 log.go:172] (0xc000162840) (0xc00074c640) Stream added, broadcasting: 1\nI0512 08:58:20.687898 3004 log.go:172] (0xc000162840) Reply frame received for 1\nI0512 08:58:20.687938 3004 log.go:172] (0xc000162840) (0xc000660dc0) Create stream\nI0512 08:58:20.687955 3004 log.go:172] (0xc000162840) (0xc000660dc0) Stream added, broadcasting: 3\nI0512 08:58:20.688585 3004 log.go:172] (0xc000162840) Reply frame received for 3\nI0512 08:58:20.688619 3004 log.go:172] (0xc000162840) (0xc000660f00) Create stream\nI0512 08:58:20.688633 3004 log.go:172] (0xc000162840) (0xc000660f00) Stream added, broadcasting: 5\nI0512 08:58:20.689543 3004 log.go:172] (0xc000162840) Reply frame received for 5\nI0512 08:58:20.758048 3004 log.go:172] (0xc000162840) Data frame received for 5\nI0512 08:58:20.758104 3004 log.go:172] (0xc000660f00) (5) Data frame handling\nI0512 08:58:20.758140 3004 log.go:172] (0xc000162840) Data frame received for 3\nI0512 08:58:20.758156 3004 log.go:172] (0xc000660dc0) (3) Data frame handling\nI0512 08:58:20.758165 3004 log.go:172] (0xc000660dc0) (3) Data frame sent\nI0512 08:58:20.758183 3004 log.go:172] (0xc000162840) Data frame received for 3\nI0512 08:58:20.758190 3004 log.go:172] (0xc000660dc0) (3) Data frame handling\nI0512 08:58:20.759379 3004 log.go:172] (0xc000162840) Data frame received for 1\nI0512 08:58:20.759402 3004 log.go:172] (0xc00074c640) (1) Data frame handling\nI0512 08:58:20.759416 3004 log.go:172] (0xc00074c640) (1) Data frame sent\nI0512 08:58:20.759486 3004 log.go:172] (0xc000162840) (0xc00074c640) Stream removed, broadcasting: 1\nI0512 08:58:20.759578 3004 log.go:172] (0xc000162840) Go away received\nI0512 08:58:20.759938 3004 log.go:172] (0xc000162840) (0xc00074c640) Stream removed, broadcasting: 1\nI0512 08:58:20.759961 3004 log.go:172] (0xc000162840) (0xc000660dc0) Stream removed, broadcasting: 3\nI0512 08:58:20.759974 3004 log.go:172] (0xc000162840) (0xc000660f00) Stream removed, broadcasting: 5\n" May 12 08:58:20.764: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 12 08:58:20.764: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 12 08:58:20.764: 
INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 12 08:58:50.837: INFO: Deleting all statefulset in ns e2e-tests-statefulset-z6tfr May 12 08:58:50.840: INFO: Scaling statefulset ss to 0 May 12 08:58:50.847: INFO: Waiting for statefulset status.replicas updated to 0 May 12 08:58:50.850: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:58:50.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-z6tfr" for this suite. May 12 08:59:00.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:59:01.048: INFO: namespace: e2e-tests-statefulset-z6tfr, resource: bindings, ignored listing per whitelist May 12 08:59:01.065: INFO: namespace e2e-tests-statefulset-z6tfr deletion completed in 10.15244694s • [SLOW TEST:116.425 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:59:01.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token May 12 08:59:03.099: INFO: created pod pod-service-account-defaultsa May 12 08:59:03.099: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 12 08:59:03.333: INFO: created pod pod-service-account-mountsa May 12 08:59:03.333: INFO: pod pod-service-account-mountsa service account token volume mount: true May 12 08:59:03.355: INFO: created pod pod-service-account-nomountsa May 12 08:59:03.355: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 12 08:59:03.430: INFO: created pod pod-service-account-defaultsa-mountspec May 12 08:59:03.430: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 12 08:59:03.533: INFO: created pod pod-service-account-mountsa-mountspec May 12 08:59:03.533: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 12 08:59:03.581: INFO: created pod pod-service-account-nomountsa-mountspec May 12 08:59:03.581: INFO: pod pod-service-account-nomountsa-mountspec service account token 
volume mount: true May 12 08:59:03.932: INFO: created pod pod-service-account-defaultsa-nomountspec May 12 08:59:03.932: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 12 08:59:04.017: INFO: created pod pod-service-account-mountsa-nomountspec May 12 08:59:04.017: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 12 08:59:04.135: INFO: created pod pod-service-account-nomountsa-nomountspec May 12 08:59:04.135: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:59:04.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-dz2nk" for this suite. May 12 08:59:42.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:59:42.611: INFO: namespace: e2e-tests-svcaccounts-dz2nk, resource: bindings, ignored listing per whitelist May 12 08:59:42.649: INFO: namespace e2e-tests-svcaccounts-dz2nk deletion completed in 38.453607276s • [SLOW TEST:41.584 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:59:42.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-ebbbb6b5-942e-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume secrets May 12 08:59:43.754: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ebc43cf6-942e-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-projected-2l2zr" to be "success or failure" May 12 08:59:43.962: INFO: Pod "pod-projected-secrets-ebc43cf6-942e-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 208.35163ms May 12 08:59:45.991: INFO: Pod "pod-projected-secrets-ebc43cf6-942e-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237520643s May 12 08:59:47.995: INFO: Pod "pod-projected-secrets-ebc43cf6-942e-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240853026s May 12 08:59:49.999: INFO: Pod "pod-projected-secrets-ebc43cf6-942e-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.245047739s STEP: Saw pod success May 12 08:59:49.999: INFO: Pod "pod-projected-secrets-ebc43cf6-942e-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 08:59:50.001: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-ebc43cf6-942e-11ea-bb6f-0242ac11001c container projected-secret-volume-test: STEP: delete the pod May 12 08:59:50.251: INFO: Waiting for pod pod-projected-secrets-ebc43cf6-942e-11ea-bb6f-0242ac11001c to disappear May 12 08:59:50.293: INFO: Pod pod-projected-secrets-ebc43cf6-942e-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 08:59:50.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2l2zr" for this suite. May 12 08:59:58.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 08:59:58.389: INFO: namespace: e2e-tests-projected-2l2zr, resource: bindings, ignored listing per whitelist May 12 08:59:58.395: INFO: namespace e2e-tests-projected-2l2zr deletion completed in 8.098155445s • [SLOW TEST:15.745 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 08:59:58.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 09:00:58.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-pzj8h" for this suite. 
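The readiness-probe case above deliberately never becomes Ready. A minimal sketch of the kind of pod spec such a test exercises is shown below; the name, image, and probe timings are illustrative assumptions, not the exact manifest used by the e2e framework.

    apiVersion: v1
    kind: Pod
    metadata:
      name: readiness-never-ready          # illustrative name, not from the suite
    spec:
      containers:
      - name: probe-test
        image: busybox                     # assumed image; the suite ships its own test images
        command: ["sh", "-c", "sleep 600"]
        readinessProbe:
          exec:
            command: ["/bin/false"]        # probe always fails, so Ready stays False
          initialDelaySeconds: 5
          periodSeconds: 5

With a probe that always fails, the kubelet keeps the container Running but never reports it Ready, and since a readiness failure (unlike a liveness failure) never triggers a restart, the restart count stays at 0 for the full observation window, which is what the "never be ready and never restart" assertion checks.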
May 12 09:01:22.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 09:01:22.824: INFO: namespace: e2e-tests-container-probe-pzj8h, resource: bindings, ignored listing per whitelist May 12 09:01:22.851: INFO: namespace e2e-tests-container-probe-pzj8h deletion completed in 24.216024831s • [SLOW TEST:84.456 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 09:01:22.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 12 09:01:35.274: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 09:01:35.303: INFO: Pod pod-with-poststart-exec-hook still exists May 12 09:01:37.303: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 09:01:37.307: INFO: Pod pod-with-poststart-exec-hook still exists May 12 09:01:39.304: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 09:01:39.307: INFO: Pod pod-with-poststart-exec-hook still exists May 12 09:01:41.304: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 09:01:41.308: INFO: Pod pod-with-poststart-exec-hook still exists May 12 09:01:43.304: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 09:01:43.308: INFO: Pod pod-with-poststart-exec-hook still exists May 12 09:01:45.304: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 09:01:45.308: INFO: Pod pod-with-poststart-exec-hook still exists May 12 09:01:47.304: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 09:01:47.308: INFO: Pod pod-with-poststart-exec-hook still exists May 12 09:01:49.304: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 09:01:49.341: INFO: Pod pod-with-poststart-exec-hook still exists May 12 09:01:51.303: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 09:01:51.308: INFO: Pod pod-with-poststart-exec-hook still exists May 12 09:01:53.304: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 09:01:53.329: INFO: Pod pod-with-poststart-exec-hook still exists May 12 09:01:55.303: INFO: Waiting for pod 
pod-with-poststart-exec-hook to disappear May 12 09:01:55.587: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 09:01:55.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-jzhn5" for this suite. May 12 09:02:18.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 09:02:18.272: INFO: namespace: e2e-tests-container-lifecycle-hook-jzhn5, resource: bindings, ignored listing per whitelist May 12 09:02:18.297: INFO: namespace e2e-tests-container-lifecycle-hook-jzhn5 deletion completed in 22.70396171s • [SLOW TEST:55.445 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 09:02:18.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 12 09:02:18.888: INFO: Waiting up to 5m0s for pod "pod-48683f86-942f-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-emptydir-dzk4n" to be "success or failure" May 12 09:02:19.066: INFO: Pod "pod-48683f86-942f-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 177.776221ms May 12 09:02:21.070: INFO: Pod "pod-48683f86-942f-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181545339s May 12 09:02:23.074: INFO: Pod "pod-48683f86-942f-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185821953s May 12 09:02:25.764: INFO: Pod "pod-48683f86-942f-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.876270002s May 12 09:02:28.163: INFO: Pod "pod-48683f86-942f-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.274825641s STEP: Saw pod success May 12 09:02:28.163: INFO: Pod "pod-48683f86-942f-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 09:02:28.186: INFO: Trying to get logs from node hunter-worker pod pod-48683f86-942f-11ea-bb6f-0242ac11001c container test-container: STEP: delete the pod May 12 09:02:28.523: INFO: Waiting for pod pod-48683f86-942f-11ea-bb6f-0242ac11001c to disappear May 12 09:02:28.785: INFO: Pod pod-48683f86-942f-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 09:02:28.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-dzk4n" for this suite. May 12 09:02:35.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 09:02:35.985: INFO: namespace: e2e-tests-emptydir-dzk4n, resource: bindings, ignored listing per whitelist May 12 09:02:36.001: INFO: namespace e2e-tests-emptydir-dzk4n deletion completed in 7.211466583s • [SLOW TEST:17.704 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 09:02:36.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 09:02:36.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' May 12 09:02:36.750: INFO: stderr: "" May 12 09:02:36.750: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" May 12 09:02:36.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wn2sq' May 12 09:02:40.032: INFO: stderr: "" May 12 09:02:40.032: INFO: stdout: "replicationcontroller/redis-master created\n" May 12 09:02:40.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wn2sq' May 12 09:02:40.427: INFO: stderr: "" May 12 09:02:40.427: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. 
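The two "kubectl create -f -" calls above create a redis-master ReplicationController and a matching Service from manifests piped on stdin. The exact manifests belong to the e2e suite; the following sketch is reconstructed from the describe output that follows (labels app=redis/role=master, image gcr.io/kubernetes-e2e-test-images/redis:1.0, port 6379, Service targetPort "redis-server"), so field values are inferred rather than quoted.

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: redis-master
      labels:
        app: redis
        role: master
    spec:
      replicas: 1
      selector:
        app: redis
        role: master
      template:
        metadata:
          labels:
            app: redis
            role: master
        spec:
          containers:
          - name: redis-master
            image: gcr.io/kubernetes-e2e-test-images/redis:1.0
            ports:
            - name: redis-server          # the Service describe output shows TargetPort redis-server/TCP
              containerPort: 6379
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: redis-master
      labels:
        app: redis
        role: master
    spec:
      selector:
        app: redis
        role: master
      ports:
      - port: 6379
        targetPort: redis-server

Once the single replica reports Ready, the test runs "kubectl describe" against the pod, the rc, the service, a node, and the namespace, and asserts that each output contains the expected fields, as seen in the stdout captures below.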
May 12 09:02:41.433: INFO: Selector matched 1 pods for map[app:redis] May 12 09:02:41.433: INFO: Found 0 / 1 May 12 09:02:42.486: INFO: Selector matched 1 pods for map[app:redis] May 12 09:02:42.486: INFO: Found 0 / 1 May 12 09:02:43.551: INFO: Selector matched 1 pods for map[app:redis] May 12 09:02:43.551: INFO: Found 0 / 1 May 12 09:02:44.431: INFO: Selector matched 1 pods for map[app:redis] May 12 09:02:44.431: INFO: Found 0 / 1 May 12 09:02:46.065: INFO: Selector matched 1 pods for map[app:redis] May 12 09:02:46.065: INFO: Found 0 / 1 May 12 09:02:46.495: INFO: Selector matched 1 pods for map[app:redis] May 12 09:02:46.495: INFO: Found 1 / 1 May 12 09:02:46.495: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 12 09:02:46.509: INFO: Selector matched 1 pods for map[app:redis] May 12 09:02:46.509: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 12 09:02:46.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-fn7wg --namespace=e2e-tests-kubectl-wn2sq' May 12 09:02:46.628: INFO: stderr: "" May 12 09:02:46.628: INFO: stdout: "Name: redis-master-fn7wg\nNamespace: e2e-tests-kubectl-wn2sq\nPriority: 0\nPriorityClassName: \nNode: hunter-worker/172.17.0.3\nStart Time: Tue, 12 May 2020 09:02:40 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.206\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://9d6f16e01588ba76e12ac0e52e13e802dfd03964ba1542ee88401496accf874f\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 12 May 2020 09:02:45 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-gjdqw (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-gjdqw:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-gjdqw\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned e2e-tests-kubectl-wn2sq/redis-master-fn7wg to hunter-worker\n Normal Pulled 5s kubelet, hunter-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker Created container\n Normal Started 1s kubelet, hunter-worker Started container\n" May 12 09:02:46.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-wn2sq' May 12 09:02:46.746: INFO: stderr: "" May 12 09:02:46.746: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-wn2sq\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- 
-------\n Normal SuccessfulCreate 6s replication-controller Created pod: redis-master-fn7wg\n" May 12 09:02:46.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-wn2sq' May 12 09:02:46.849: INFO: stderr: "" May 12 09:02:46.849: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-wn2sq\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.105.45.43\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.206:6379\nSession Affinity: None\nEvents: \n" May 12 09:02:46.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' May 12 09:02:46.972: INFO: stderr: "" May 12 09:02:46.972: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 12 May 2020 09:02:42 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 12 May 2020 09:02:42 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 12 May 2020 09:02:42 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 12 May 2020 09:02:42 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 57d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\nAllocated resources:\n (Total limits may be over 100 
percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 12 09:02:46.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-wn2sq' May 12 09:02:47.074: INFO: stderr: "" May 12 09:02:47.074: INFO: stdout: "Name: e2e-tests-kubectl-wn2sq\nLabels: e2e-framework=kubectl\n e2e-run=a0455d6a-941f-11ea-bb6f-0242ac11001c\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 09:02:47.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wn2sq" for this suite. May 12 09:03:11.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 09:03:11.125: INFO: namespace: e2e-tests-kubectl-wn2sq, resource: bindings, ignored listing per whitelist May 12 09:03:11.164: INFO: namespace e2e-tests-kubectl-wn2sq deletion completed in 24.087647501s • [SLOW TEST:35.163 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 09:03:11.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-xzpzd/secret-test-67b72f5f-942f-11ea-bb6f-0242ac11001c STEP: Creating a pod to test consume secrets May 12 09:03:11.339: INFO: Waiting up to 5m0s for pod "pod-configmaps-67ba6ceb-942f-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-secrets-xzpzd" to be "success or failure" May 12 09:03:11.359: INFO: Pod "pod-configmaps-67ba6ceb-942f-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.90341ms May 12 09:03:13.362: INFO: Pod "pod-configmaps-67ba6ceb-942f-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02385393s May 12 09:03:15.366: INFO: Pod "pod-configmaps-67ba6ceb-942f-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.027585468s May 12 09:03:17.438: INFO: Pod "pod-configmaps-67ba6ceb-942f-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.09891404s STEP: Saw pod success May 12 09:03:17.438: INFO: Pod "pod-configmaps-67ba6ceb-942f-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 09:03:17.440: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-67ba6ceb-942f-11ea-bb6f-0242ac11001c container env-test: STEP: delete the pod May 12 09:03:17.468: INFO: Waiting for pod pod-configmaps-67ba6ceb-942f-11ea-bb6f-0242ac11001c to disappear May 12 09:03:17.491: INFO: Pod pod-configmaps-67ba6ceb-942f-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 09:03:17.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-xzpzd" for this suite. May 12 09:03:25.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 09:03:25.556: INFO: namespace: e2e-tests-secrets-xzpzd, resource: bindings, ignored listing per whitelist May 12 09:03:25.592: INFO: namespace e2e-tests-secrets-xzpzd deletion completed in 8.097386752s • [SLOW TEST:14.427 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 09:03:25.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 12 09:03:25.754: INFO: Waiting up to 5m0s for pod "pod-7058cc12-942f-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-emptydir-cds9v" to be "success or failure" May 12 09:03:25.774: INFO: Pod "pod-7058cc12-942f-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.248409ms May 12 09:03:27.924: INFO: Pod "pod-7058cc12-942f-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16969589s May 12 09:03:29.927: INFO: Pod "pod-7058cc12-942f-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173450347s May 12 09:03:32.109: INFO: Pod "pod-7058cc12-942f-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 6.354786394s May 12 09:03:34.113: INFO: Pod "pod-7058cc12-942f-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.358483356s STEP: Saw pod success May 12 09:03:34.113: INFO: Pod "pod-7058cc12-942f-11ea-bb6f-0242ac11001c" satisfied condition "success or failure" May 12 09:03:34.115: INFO: Trying to get logs from node hunter-worker pod pod-7058cc12-942f-11ea-bb6f-0242ac11001c container test-container: STEP: delete the pod May 12 09:03:34.151: INFO: Waiting for pod pod-7058cc12-942f-11ea-bb6f-0242ac11001c to disappear May 12 09:03:34.160: INFO: Pod pod-7058cc12-942f-11ea-bb6f-0242ac11001c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 09:03:34.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-cds9v" for this suite. May 12 09:03:40.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 09:03:40.190: INFO: namespace: e2e-tests-emptydir-cds9v, resource: bindings, ignored listing per whitelist May 12 09:03:40.250: INFO: namespace e2e-tests-emptydir-cds9v deletion completed in 6.086708508s • [SLOW TEST:14.658 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 09:03:40.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 09:03:40.377: INFO: Creating deployment "test-recreate-deployment" May 12 09:03:40.407: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 12 09:03:40.470: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created May 12 09:03:42.479: INFO: Waiting deployment "test-recreate-deployment" to complete May 12 09:03:42.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724871020, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724871020, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724871020, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724871020, loc:(*time.Location)(0x7950ac0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 09:03:44.486: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 12 09:03:44.498: INFO: Updating deployment test-recreate-deployment May 12 09:03:44.498: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 12 09:03:45.147: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-wkn46,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wkn46/deployments/test-recreate-deployment,UID:791182a6-942f-11ea-99e8-0242ac110002,ResourceVersion:10131965,Generation:2,CreationTimestamp:2020-05-12 09:03:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-12 09:03:44 +0000 UTC 2020-05-12 09:03:44 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-12 09:03:44 +0000 UTC 2020-05-12 09:03:40 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 12 09:03:45.176: INFO: New ReplicaSet 
"test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-wkn46,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wkn46/replicasets/test-recreate-deployment-589c4bfd,UID:7b975795-942f-11ea-99e8-0242ac110002,ResourceVersion:10131963,Generation:1,CreationTimestamp:2020-05-12 09:03:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 791182a6-942f-11ea-99e8-0242ac110002 0xc00096a5ef 0xc00096a600}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 09:03:45.176: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 12 09:03:45.176: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-wkn46,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wkn46/replicasets/test-recreate-deployment-5bf7f65dc,UID:791f720d-942f-11ea-99e8-0242ac110002,ResourceVersion:10131954,Generation:2,CreationTimestamp:2020-05-12 09:03:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 
1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 791182a6-942f-11ea-99e8-0242ac110002 0xc00096a770 0xc00096a771}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 09:03:45.194: INFO: Pod "test-recreate-deployment-589c4bfd-prz2j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-prz2j,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-wkn46,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wkn46/pods/test-recreate-deployment-589c4bfd-prz2j,UID:7b9a55ee-942f-11ea-99e8-0242ac110002,ResourceVersion:10131967,Generation:0,CreationTimestamp:2020-05-12 09:03:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 7b975795-942f-11ea-99e8-0242ac110002 0xc00190844f 0xc001908460}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bvpf4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bvpf4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bvpf4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019084d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019084f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:03:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:03:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:03:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:03:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-12 09:03:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 09:03:45.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-wkn46" for this suite. 
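[Editor's note] The dumps above show what this RecreateDeployment test exercises: a Deployment whose Strategy is Type:Recreate, rolled from an old ReplicaSet templated on the redis image to a new one templated on nginx; the old ReplicaSet is scaled to 0 before the new pod is even out of Pending, which is why the Deployment reports Available=False while Progressing. For orientation only, a minimal Go sketch of a Deployment object with that Recreate strategy follows; the int32Ptr helper and the printed summary are illustrative and not part of the e2e framework.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// int32Ptr is a small helper for pointer-valued fields such as Replicas.
func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "sample-pod-3"}

	// Strategy Type=Recreate: every pod from the old template is deleted
	// before any pod from the new template is created.
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{Name: "nginx", Image: "docker.io/library/nginx:1.14-alpine"},
					},
				},
			},
		},
	}
	fmt.Printf("%s uses strategy %s\n", d.Name, d.Spec.Strategy.Type)
}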
May 12 09:03:51.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 09:03:51.486: INFO: namespace: e2e-tests-deployment-wkn46, resource: bindings, ignored listing per whitelist May 12 09:03:51.504: INFO: namespace e2e-tests-deployment-wkn46 deletion completed in 6.305899322s • [SLOW TEST:11.254 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 09:03:51.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 12 09:03:51.677: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 09:04:01.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-p6hvk" for this suite. 
May 12 09:04:10.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 12 09:04:10.186: INFO: namespace: e2e-tests-init-container-p6hvk, resource: bindings, ignored listing per whitelist May 12 09:04:10.236: INFO: namespace e2e-tests-init-container-p6hvk deletion completed in 8.257434137s • [SLOW TEST:18.732 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 12 09:04:10.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 12 09:04:10.512: INFO: Creating deployment "nginx-deployment" May 12 09:04:10.518: INFO: Waiting for observed generation 1 May 12 09:04:12.709: INFO: Waiting for all required pods to come up May 12 09:04:12.713: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running May 12 09:04:27.403: INFO: Waiting for deployment "nginx-deployment" to complete May 12 09:04:27.408: INFO: Updating deployment "nginx-deployment" with a non-existent image May 12 09:04:27.414: INFO: Updating deployment nginx-deployment May 12 09:04:27.414: INFO: Waiting for observed generation 2 May 12 09:04:29.949: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 12 09:04:29.951: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 12 09:04:29.954: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 12 09:04:30.803: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 12 09:04:30.803: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 12 09:04:31.469: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 12 09:04:32.456: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas May 12 09:04:32.456: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 12 09:04:33.260: INFO: Updating deployment nginx-deployment May 12 09:04:33.260: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 12 09:04:34.119: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 12 09:04:34.217: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 12 09:04:34.583: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-m7fhv/deployments/nginx-deployment,UID:8b07c85a-942f-11ea-99e8-0242ac110002,ResourceVersion:10132311,Generation:3,CreationTimestamp:2020-05-12 09:04:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-05-12 09:04:29 +0000 UTC 2020-05-12 09:04:10 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-05-12 09:04:34 +0000 UTC 2020-05-12 09:04:34 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} May 12 09:04:34.684: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-m7fhv/replicasets/nginx-deployment-5c98f8fb5,UID:951acd09-942f-11ea-99e8-0242ac110002,ResourceVersion:10132308,Generation:3,CreationTimestamp:2020-05-12 09:04:27 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 8b07c85a-942f-11ea-99e8-0242ac110002 0xc00203f377 0xc00203f378}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 12 09:04:34.684: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 12 09:04:34.684: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-m7fhv/replicasets/nginx-deployment-85ddf47c5d,UID:8b12059e-942f-11ea-99e8-0242ac110002,ResourceVersion:10132304,Generation:3,CreationTimestamp:2020-05-12 09:04:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 8b07c85a-942f-11ea-99e8-0242ac110002 0xc00203f437 0xc00203f438}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 12 09:04:35.234: INFO: Pod "nginx-deployment-5c98f8fb5-45ln2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-45ln2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-5c98f8fb5-45ln2,UID:9966da69-942f-11ea-99e8-0242ac110002,ResourceVersion:10132327,Generation:0,CreationTimestamp:2020-05-12 09:04:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 951acd09-942f-11ea-99e8-0242ac110002 0xc001b7ffa7 0xc001b7ffa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00131e290} {node.kubernetes.io/unreachable Exists NoExecute 0xc00131e2b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.234: INFO: Pod "nginx-deployment-5c98f8fb5-gtzvl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gtzvl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-5c98f8fb5-gtzvl,UID:95516a2f-942f-11ea-99e8-0242ac110002,ResourceVersion:10132281,Generation:0,CreationTimestamp:2020-05-12 09:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 951acd09-942f-11ea-99e8-0242ac110002 0xc00131e327 0xc00131e328}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00131e3b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00131e400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 
UTC 2020-05-12 09:04:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-12 09:04:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.234: INFO: Pod "nginx-deployment-5c98f8fb5-jxv2g" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jxv2g,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-5c98f8fb5-jxv2g,UID:99704f35-942f-11ea-99e8-0242ac110002,ResourceVersion:10132328,Generation:0,CreationTimestamp:2020-05-12 09:04:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 951acd09-942f-11ea-99e8-0242ac110002 0xc00131e547 0xc00131e548}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00131e5c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00131e5e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.235: INFO: Pod "nginx-deployment-5c98f8fb5-q425s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-q425s,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-5c98f8fb5-q425s,UID:99705959-942f-11ea-99e8-0242ac110002,ResourceVersion:10132333,Generation:0,CreationTimestamp:2020-05-12 09:04:34 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 951acd09-942f-11ea-99e8-0242ac110002 0xc00131e640 0xc00131e641}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00131e6b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00131e700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.235: INFO: Pod "nginx-deployment-5c98f8fb5-rp9mt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rp9mt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-5c98f8fb5-rp9mt,UID:951c11c5-942f-11ea-99e8-0242ac110002,ResourceVersion:10132253,Generation:0,CreationTimestamp:2020-05-12 09:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 951acd09-942f-11ea-99e8-0242ac110002 0xc00131e760 0xc00131e761}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00131e8a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00131e8c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-12 09:04:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.235: INFO: Pod "nginx-deployment-5c98f8fb5-sj299" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sj299,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-5c98f8fb5-sj299,UID:954c52f8-942f-11ea-99e8-0242ac110002,ResourceVersion:10132275,Generation:0,CreationTimestamp:2020-05-12 09:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 951acd09-942f-11ea-99e8-0242ac110002 0xc00131e987 0xc00131e988}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00131ea90} {node.kubernetes.io/unreachable Exists NoExecute 0xc00131eb50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-12 09:04:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.236: INFO: Pod "nginx-deployment-5c98f8fb5-vvlk6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vvlk6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-5c98f8fb5-vvlk6,UID:951d15be-942f-11ea-99e8-0242ac110002,ResourceVersion:10132267,Generation:0,CreationTimestamp:2020-05-12 09:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 951acd09-942f-11ea-99e8-0242ac110002 0xc00131ec17 0xc00131ec18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00131ed10} {node.kubernetes.io/unreachable Exists NoExecute 0xc00131ed70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-12 09:04:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.236: INFO: Pod "nginx-deployment-5c98f8fb5-ww96q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ww96q,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-5c98f8fb5-ww96q,UID:951d0e85-942f-11ea-99e8-0242ac110002,ResourceVersion:10132303,Generation:0,CreationTimestamp:2020-05-12 09:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 951acd09-942f-11ea-99e8-0242ac110002 0xc00131ee37 0xc00131ee38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00131eeb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00131eed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.142,StartTime:2020-05-12 09:04:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.236: INFO: Pod "nginx-deployment-85ddf47c5d-4x4nv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4x4nv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-4x4nv,UID:9970d694-942f-11ea-99e8-0242ac110002,ResourceVersion:10132335,Generation:0,CreationTimestamp:2020-05-12 09:04:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc00131f0a7 0xc00131f0a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00131f110} {node.kubernetes.io/unreachable Exists NoExecute 0xc00131f130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.236: INFO: Pod "nginx-deployment-85ddf47c5d-7x9n9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7x9n9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-7x9n9,UID:8b1bc9e7-942f-11ea-99e8-0242ac110002,ResourceVersion:10132189,Generation:0,CreationTimestamp:2020-05-12 09:04:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc00131f190 0xc00131f191}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00131f380} {node.kubernetes.io/unreachable Exists NoExecute 0xc00131f3a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:22 +0000 UTC } {ContainersReady True 0001-01-01 
00:00:00 +0000 UTC 2020-05-12 09:04:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.211,StartTime:2020-05-12 09:04:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 09:04:21 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://dd69a1ec28dd00b0640d514e0327a1ae900e1ec25a48260a6560a1c39d030126}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.236: INFO: Pod "nginx-deployment-85ddf47c5d-8v56w" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8v56w,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-8v56w,UID:8b25494b-942f-11ea-99e8-0242ac110002,ResourceVersion:10132210,Generation:0,CreationTimestamp:2020-05-12 09:04:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc00131f467 0xc00131f468}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00131f4e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00131f620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.140,StartTime:2020-05-12 09:04:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 09:04:24 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine 
docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4ad838dbad6d5a25cafefc07c49cb91a56caca0601ea49f5db4ae4a189504585}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.237: INFO: Pod "nginx-deployment-85ddf47c5d-9fmfc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9fmfc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-9fmfc,UID:8b1d245c-942f-11ea-99e8-0242ac110002,ResourceVersion:10132204,Generation:0,CreationTimestamp:2020-05-12 09:04:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc00131f707 0xc00131f708}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00131fc40} {node.kubernetes.io/unreachable Exists NoExecute 0xc00131fcf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.213,StartTime:2020-05-12 09:04:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 09:04:23 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://eac41670b22add5fc07480352059e0fb6610d1fa71b332ee6fbfc05048ac11fa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.237: INFO: Pod "nginx-deployment-85ddf47c5d-cq5rx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cq5rx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-cq5rx,UID:99609474-942f-11ea-99e8-0242ac110002,ResourceVersion:10132323,Generation:0,CreationTimestamp:2020-05-12 09:04:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc002944067 0xc002944068}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002944270} {node.kubernetes.io/unreachable Exists NoExecute 0xc002944290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.237: INFO: Pod "nginx-deployment-85ddf47c5d-cwq8v" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cwq8v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-cwq8v,UID:8b1d2e58-942f-11ea-99e8-0242ac110002,ResourceVersion:10132216,Generation:0,CreationTimestamp:2020-05-12 09:04:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc002944337 0xc002944338}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002944500} {node.kubernetes.io/unreachable Exists NoExecute 0xc002944530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.139,StartTime:2020-05-12 09:04:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 09:04:24 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1d89375e13c52788138ffa3c9845486a479c86b420efe9ab525da7514a76c93e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.237: INFO: Pod "nginx-deployment-85ddf47c5d-dvrft" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dvrft,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-dvrft,UID:8b1d1d5b-942f-11ea-99e8-0242ac110002,ResourceVersion:10132217,Generation:0,CreationTimestamp:2020-05-12 09:04:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc002944607 0xc002944608}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029447c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029447f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.212,StartTime:2020-05-12 09:04:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 09:04:25 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4e8f4c6682994f7cf35892935d15ebaf0ac3380db982a0102a46714f75d9ce88}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.237: INFO: Pod "nginx-deployment-85ddf47c5d-gmk7v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gmk7v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-gmk7v,UID:996718bc-942f-11ea-99e8-0242ac110002,ResourceVersion:10132332,Generation:0,CreationTimestamp:2020-05-12 09:04:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc002944987 0xc002944988}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002944a00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002944a20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.237: INFO: Pod "nginx-deployment-85ddf47c5d-hj8g2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hj8g2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-hj8g2,UID:8b1d240d-942f-11ea-99e8-0242ac110002,ResourceVersion:10132196,Generation:0,CreationTimestamp:2020-05-12 09:04:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc002944ca7 0xc002944ca8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002944d20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002944d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:10 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.138,StartTime:2020-05-12 09:04:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 09:04:22 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6499af31d18da90fd2f4d9736fd59ebf35d8310203a3e33e34841ce9fdd2754f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.238: INFO: Pod "nginx-deployment-85ddf47c5d-kp92s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kp92s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-kp92s,UID:9960960a-942f-11ea-99e8-0242ac110002,ResourceVersion:10132325,Generation:0,CreationTimestamp:2020-05-12 09:04:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc002944e07 0xc002944e08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002944eb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002944ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.238: INFO: Pod "nginx-deployment-85ddf47c5d-pnhwb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pnhwb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-pnhwb,UID:9970e42a-942f-11ea-99e8-0242ac110002,ResourceVersion:10132339,Generation:0,CreationTimestamp:2020-05-12 09:04:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc002944f47 0xc002944f48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002944fb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002944fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.238: INFO: Pod "nginx-deployment-85ddf47c5d-pnwfr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pnwfr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-pnwfr,UID:9970e046-942f-11ea-99e8-0242ac110002,ResourceVersion:10132337,Generation:0,CreationTimestamp:2020-05-12 09:04:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc0029450a0 0xc0029450a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} 
[{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002945100} {node.kubernetes.io/unreachable Exists NoExecute 0xc002945120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.238: INFO: Pod "nginx-deployment-85ddf47c5d-qhjn8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qhjn8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-qhjn8,UID:9956c371-942f-11ea-99e8-0242ac110002,ResourceVersion:10132316,Generation:0,CreationTimestamp:2020-05-12 09:04:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc002945180 0xc002945181}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029451f0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002945210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.238: INFO: Pod "nginx-deployment-85ddf47c5d-vfl8c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vfl8c,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-vfl8c,UID:9970e2a0-942f-11ea-99e8-0242ac110002,ResourceVersion:10132338,Generation:0,CreationTimestamp:2020-05-12 09:04:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc002945317 0xc002945318}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002945380} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029453a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.238: INFO: Pod "nginx-deployment-85ddf47c5d-vmmpm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vmmpm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-vmmpm,UID:8b19b18e-942f-11ea-99e8-0242ac110002,ResourceVersion:10132171,Generation:0,CreationTimestamp:2020-05-12 09:04:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc002945400 0xc002945401}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002945520} {node.kubernetes.io/unreachable Exists NoExecute 0xc002945540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.210,StartTime:2020-05-12 09:04:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 09:04:17 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://528608fcb4365793e05b5303eced816f938858936f5a2de71fbd069ba0558f5c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.238: INFO: Pod "nginx-deployment-85ddf47c5d-ws2tl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ws2tl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-ws2tl,UID:8b1bcf57-942f-11ea-99e8-0242ac110002,ResourceVersion:10132178,Generation:0,CreationTimestamp:2020-05-12 09:04:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc002945607 0xc002945608}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029457c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029457e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.137,StartTime:2020-05-12 09:04:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-12 09:04:19 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1c6e43f8003a89269e39b3db7306e8c57e28a3bbd65d3f1b91e4ca1c5ca1faf7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.239: INFO: Pod "nginx-deployment-85ddf47c5d-xfglv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xfglv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-xfglv,UID:9967223a-942f-11ea-99e8-0242ac110002,ResourceVersion:10132334,Generation:0,CreationTimestamp:2020-05-12 09:04:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc0029458a7 0xc0029458a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029459c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029459f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.239: INFO: Pod "nginx-deployment-85ddf47c5d-xllr9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xllr9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-xllr9,UID:99671a28-942f-11ea-99e8-0242ac110002,ResourceVersion:10132330,Generation:0,CreationTimestamp:2020-05-12 09:04:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc002945a77 0xc002945a78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002945b70} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002945b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.239: INFO: Pod "nginx-deployment-85ddf47c5d-zp7x6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zp7x6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-zp7x6,UID:9967072f-942f-11ea-99e8-0242ac110002,ResourceVersion:10132331,Generation:0,CreationTimestamp:2020-05-12 09:04:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc002945c07 0xc002945c08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002945cd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002945cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 09:04:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 12 09:04:35.239: INFO: Pod "nginx-deployment-85ddf47c5d-zvrzz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zvrzz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-m7fhv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m7fhv/pods/nginx-deployment-85ddf47c5d-zvrzz,UID:9970d648-942f-11ea-99e8-0242ac110002,ResourceVersion:10132336,Generation:0,CreationTimestamp:2020-05-12 09:04:34 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 8b12059e-942f-11ea-99e8-0242ac110002 0xc002945d67 0xc002945d68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wtm44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wtm44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wtm44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002945dd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002945df0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 12 09:04:35.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-m7fhv" for this suite. 
May 12 09:05:04.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:05:04.428: INFO: namespace: e2e-tests-deployment-m7fhv, resource: bindings, ignored listing per whitelist
May 12 09:05:04.491: INFO: namespace e2e-tests-deployment-m7fhv deletion completed in 28.983632752s
• [SLOW TEST:54.255 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
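The proportional scaling spec above scales an nginx Deployment while a rollout is still in flight and expects the extra replicas to be distributed between the old and the new ReplicaSet in proportion to their current sizes. As a reading aid, here is a minimal sketch (not the e2e fixture itself) of a comparable Deployment built from the same API types that appear in the pod dumps above; the replica count and the maxSurge/maxUnavailable values are assumptions chosen for illustration, and sigs.k8s.io/yaml is assumed to be available for printing the object.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Surge/unavailable values are assumptions for illustration; during a
	// rollout they determine how many old and new pods may coexist.
	maxSurge := intstr.FromInt(3)
	maxUnavailable := intstr.FromInt(2)
	labels := map[string]string{"name": "nginx"}

	d := &appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}

	// Scaling this Deployment while old and new ReplicaSets both still own
	// pods makes the controller split the new replicas between them in
	// proportion to their current sizes ("proportional scaling").
	out, err := yaml.Marshal(d)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

Scaling such a Deployment (for example from 10 to a higher replica count) while both ReplicaSets still have pods is what triggers the proportional distribution the spec checks for.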
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 09:05:04.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
May 12 09:05:06.165: INFO: Pod name pod-release: Found 0 pods out of 1
May 12 09:05:11.314: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 09:05:12.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-68fjw" for this suite.
May 12 09:05:21.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:05:21.556: INFO: namespace: e2e-tests-replication-controller-68fjw, resource: bindings, ignored listing per whitelist
May 12 09:05:21.662: INFO: namespace e2e-tests-replication-controller-68fjw deletion completed in 8.424832145s
• [SLOW TEST:17.171 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
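The ReplicationController spec above creates an RC named pod-release, waits for its single pod, then changes that pod's labels so it no longer matches the selector and expects the controller to release the pod and start a replacement. The following is a rough sketch of the objects involved, assuming the selector name and the nginx image seen elsewhere in this run; the JSON snippet printed at the end is only an illustration of the kind of label change that detaches a pod, not the test's own code, and sigs.k8s.io/yaml is assumed to be available.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "pod-release"}

	rc := &corev1.ReplicationController{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ReplicationController"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-release"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: labels, // pods carrying these labels are owned by the RC
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}

	out, err := yaml.Marshal(rc)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))

	// Rewriting a pod's labels so they stop matching rc.Spec.Selector is what
	// "releases" it: the controller drops its controller ownerReference on the
	// pod and creates a fresh replica to restore the desired count. With a live
	// client this would be a label update or patch on the pod, for example:
	fmt.Println(`{"metadata":{"labels":{"name":"no-longer-matching"}}}`)
}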
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 09:05:21.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
May 12 09:05:22.451: INFO: Waiting up to 5m0s for pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-7sfr6" in namespace "e2e-tests-svcaccounts-hrdv8" to be "success or failure"
May 12 09:05:22.521: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-7sfr6": Phase="Pending", Reason="", readiness=false. Elapsed: 70.585093ms
May 12 09:05:24.525: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-7sfr6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074667932s
May 12 09:05:26.529: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-7sfr6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078517016s
May 12 09:05:28.533: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-7sfr6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082378103s
May 12 09:05:30.555: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-7sfr6": Phase="Running", Reason="", readiness=false. Elapsed: 8.104476557s
May 12 09:05:32.559: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-7sfr6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.107959433s
STEP: Saw pod success
May 12 09:05:32.559: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-7sfr6" satisfied condition "success or failure"
May 12 09:05:32.561: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-7sfr6 container token-test:
STEP: delete the pod
May 12 09:05:32.602: INFO: Waiting for pod pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-7sfr6 to disappear
May 12 09:05:32.632: INFO: Pod pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-7sfr6 no longer exists
STEP: Creating a pod to test consume service account root CA
May 12 09:05:32.636: INFO: Waiting up to 5m0s for pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-2ng59" in namespace "e2e-tests-svcaccounts-hrdv8" to be "success or failure"
May 12 09:05:32.811: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-2ng59": Phase="Pending", Reason="", readiness=false. Elapsed: 175.161615ms
May 12 09:05:34.815: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-2ng59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179268461s
May 12 09:05:36.819: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-2ng59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183027006s
May 12 09:05:38.826: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-2ng59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190407546s
May 12 09:05:40.889: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-2ng59": Phase="Pending", Reason="", readiness=false. Elapsed: 8.252959276s
May 12 09:05:42.894: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-2ng59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.258161545s
STEP: Saw pod success
May 12 09:05:42.894: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-2ng59" satisfied condition "success or failure"
May 12 09:05:42.896: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-2ng59 container root-ca-test:
STEP: delete the pod
May 12 09:05:42.944: INFO: Waiting for pod pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-2ng59 to disappear
May 12 09:05:42.958: INFO: Pod pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-2ng59 no longer exists
STEP: Creating a pod to test consume service account namespace
May 12 09:05:42.963: INFO: Waiting up to 5m0s for pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-226mk" in namespace "e2e-tests-svcaccounts-hrdv8" to be "success or failure"
May 12 09:05:42.990: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-226mk": Phase="Pending", Reason="", readiness=false. Elapsed: 26.876259ms
May 12 09:05:45.075: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-226mk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112176259s
May 12 09:05:47.078: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-226mk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115321636s
May 12 09:05:49.117: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-226mk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.153989509s
May 12 09:05:51.121: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-226mk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.157734635s
STEP: Saw pod success
May 12 09:05:51.121: INFO: Pod "pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-226mk" satisfied condition "success or failure"
May 12 09:05:51.123: INFO: Trying to get logs from node hunter-worker pod pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-226mk container namespace-test:
STEP: delete the pod
May 12 09:05:51.470: INFO: Waiting for pod pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-226mk to disappear
May 12 09:05:51.510: INFO: Pod pod-service-account-b5e79e56-942f-11ea-bb6f-0242ac11001c-226mk no longer exists
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 09:05:51.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-hrdv8" for this suite.
May 12 09:05:59.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:05:59.659: INFO: namespace: e2e-tests-svcaccounts-hrdv8, resource: bindings, ignored listing per whitelist
May 12 09:05:59.664: INFO: namespace e2e-tests-svcaccounts-hrdv8 deletion completed in 8.148692868s
• [SLOW TEST:38.001 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
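The three pods above (containers token-test, root-ca-test and namespace-test) each print one file from the automatically mounted service-account volume, and the test checks that output against the token, root CA and namespace it got from the API. Below is a minimal sketch of a pod in the spirit of the first one, which reads the mounted token; the busybox image and shell command are stand-ins for the suite's own test image, the mount path is the standard one seen in the pod specs above, and sigs.k8s.io/yaml is assumed to be available.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// With automountServiceAccountToken left at its default, the kubelet mounts
	// the default service account's token, ca.crt and namespace files under
	// /var/run/secrets/kubernetes.io/serviceaccount (the default-token-* volume
	// visible in the pod specs earlier in this log).
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "svc-account-token-check"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "token-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c"},
				Args:    []string{"cat /var/run/secrets/kubernetes.io/serviceaccount/token"},
			}},
		},
	}

	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}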
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 09:05:59.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 12 09:06:03.885: INFO: Waiting up to 5m0s for pod "client-envvars-ce99c979-942f-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-pods-6x4dv" to be "success or failure"
May 12 09:06:03.920: INFO: Pod "client-envvars-ce99c979-942f-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 35.537211ms
May 12 09:06:05.924: INFO: Pod "client-envvars-ce99c979-942f-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039218526s
May 12 09:06:07.928: INFO: Pod "client-envvars-ce99c979-942f-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042739425s
STEP: Saw pod success
May 12 09:06:07.928: INFO: Pod "client-envvars-ce99c979-942f-11ea-bb6f-0242ac11001c" satisfied condition "success or failure"
May 12 09:06:07.930: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-ce99c979-942f-11ea-bb6f-0242ac11001c container env3cont:
STEP: delete the pod
May 12 09:06:08.023: INFO: Waiting for pod client-envvars-ce99c979-942f-11ea-bb6f-0242ac11001c to disappear
May 12 09:06:08.038: INFO: Pod client-envvars-ce99c979-942f-11ea-bb6f-0242ac11001c no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 09:06:08.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-6x4dv" for this suite.
May 12 09:06:54.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:06:54.083: INFO: namespace: e2e-tests-pods-6x4dv, resource: bindings, ignored listing per whitelist
May 12 09:06:54.143: INFO: namespace e2e-tests-pods-6x4dv deletion completed in 46.10090138s
• [SLOW TEST:54.479 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
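The env3cont container above only dumps its environment; the assertion is that a Service created before the pod starts shows up as <NAME>_SERVICE_HOST / <NAME>_SERVICE_PORT variables injected by the kubelet. The sketch below shows the two objects such a check needs; the service name fooservice, its ports, the server selector and the busybox image are assumptions for illustration rather than the exact fixture used by the test, and sigs.k8s.io/yaml is assumed to be available.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	// A Service that already exists when the client pod starts. The kubelet
	// then injects FOOSERVICE_SERVICE_HOST, FOOSERVICE_SERVICE_PORT and the
	// related *_PORT_* variables into containers created afterwards in the
	// same namespace.
	svc := &corev1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: "fooservice"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "server"},
			Ports: []corev1.ServicePort{{
				Port:       8765,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}

	// A client pod whose only job is to dump its environment, like the
	// env3cont container in the log above.
	client := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "client-envvars"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env3cont",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env"},
			}},
		},
	}

	for _, obj := range []interface{}{svc, client} {
		out, err := yaml.Marshal(obj)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}
}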
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 12 09:06:54.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
May 12 09:06:54.267: INFO: Waiting up to 5m0s for pod "downward-api-eca16842-942f-11ea-bb6f-0242ac11001c" in namespace "e2e-tests-downward-api-pngkd" to be "success or failure"
May 12 09:06:54.308: INFO: Pod "downward-api-eca16842-942f-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 40.889595ms
May 12 09:06:56.312: INFO: Pod "downward-api-eca16842-942f-11ea-bb6f-0242ac11001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044819394s
May 12 09:06:58.315: INFO: Pod "downward-api-eca16842-942f-11ea-bb6f-0242ac11001c": Phase="Running", Reason="", readiness=true. Elapsed: 4.048324994s
May 12 09:07:00.320: INFO: Pod "downward-api-eca16842-942f-11ea-bb6f-0242ac11001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052478609s
STEP: Saw pod success
May 12 09:07:00.320: INFO: Pod "downward-api-eca16842-942f-11ea-bb6f-0242ac11001c" satisfied condition "success or failure"
May 12 09:07:00.323: INFO: Trying to get logs from node hunter-worker pod downward-api-eca16842-942f-11ea-bb6f-0242ac11001c container dapi-container:
STEP: delete the pod
May 12 09:07:00.345: INFO: Waiting for pod downward-api-eca16842-942f-11ea-bb6f-0242ac11001c to disappear
May 12 09:07:00.350: INFO: Pod downward-api-eca16842-942f-11ea-bb6f-0242ac11001c no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 12 09:07:00.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pngkd" for this suite.
May 12 09:07:06.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 09:07:06.428: INFO: namespace: e2e-tests-downward-api-pngkd, resource: bindings, ignored listing per whitelist
May 12 09:07:06.435: INFO: namespace e2e-tests-downward-api-pngkd deletion completed in 6.082810031s
• [SLOW TEST:12.292 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
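The dapi-container above declares no resource limits of its own, so when its environment asks for limits.cpu and limits.memory through the downward API the values fall back to the node's allocatable CPU and memory, which is what the spec verifies in the container log. Below is a minimal sketch of a pod wired up the same way via resourceFieldRef; the pod name, image and shell command are placeholders, and sigs.k8s.io/yaml is assumed to be available.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-defaults"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env | grep _LIMIT"},
				// No resources.limits are declared, so limits.cpu and
				// limits.memory resolve to the node's allocatable values,
				// which is exactly what the spec above asserts on.
				Env: []corev1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								Resource: "limits.cpu",
								Divisor:  resource.MustParse("1"),
							},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								Resource: "limits.memory",
								Divisor:  resource.MustParse("1"),
							},
						},
					},
				},
			}},
		},
	}

	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}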
SSSSSSSSSSSSS
May 12 09:07:06.436: INFO: Running AfterSuite actions on all nodes
May 12 09:07:06.436: INFO: Running AfterSuite actions on node 1
May 12 09:07:06.436: INFO: Skipping dumping logs from cluster
Ran 200 of 2164 Specs in 7011.580 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS