I0602 10:46:46.077863 6 e2e.go:224] Starting e2e run "5a6787e5-a4be-11ea-889d-0242ac110018" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1591094805 - Will randomize all specs
Will run 201 of 2164 specs

Jun 2 10:46:46.272: INFO: >>> kubeConfig: /root/.kube/config
Jun 2 10:46:46.278: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 2 10:46:46.293: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 2 10:46:46.326: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 2 10:46:46.326: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 2 10:46:46.326: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 2 10:46:46.334: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 2 10:46:46.334: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 2 10:46:46.334: INFO: e2e test version: v1.13.12
Jun 2 10:46:46.336: INFO: kube-apiserver version: v1.13.12
SSSSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 2 10:46:46.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
Jun 2 10:46:46.438: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-fr99z
Jun 2 10:46:50.454: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-fr99z
STEP: checking the pod's current state and verifying that restartCount is present
Jun 2 10:46:50.457: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 2 10:50:51.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-fr99z" for this suite.
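Editorially, for readers who want to reproduce the probe behaviour this spec exercises outside the e2e framework, a minimal sketch follows. The image, startup command, and timings are illustrative assumptions, not the exact manifest the framework builds; only the exec "cat /tmp/health" probe and the expectation that restartCount stays at 0 come from the log above.

# Sketch: a pod whose exec liveness probe runs "cat /tmp/health".
# The file is created at startup and never removed, so the probe keeps
# succeeding and the container should never be restarted.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox:1.29                       # assumed image; the e2e test uses its own
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# Check that the restart count stays at 0, which is what the spec asserts:
kubectl get pod liveness-exec -o jsonpath='{.status.containerStatuses[0].restartCount}'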
Jun 2 10:50:57.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 2 10:50:57.094: INFO: namespace: e2e-tests-container-probe-fr99z, resource: bindings, ignored listing per whitelist
Jun 2 10:50:57.155: INFO: namespace e2e-tests-container-probe-fr99z deletion completed in 6.093293253s
• [SLOW TEST:250.819 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 2 10:50:57.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-m56rm
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 2 10:50:57.260: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jun 2 10:51:17.384: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.204:8080/dial?request=hostName&protocol=http&host=10.244.1.126&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-m56rm PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 2 10:51:17.384: INFO: >>> kubeConfig: /root/.kube/config
I0602 10:51:17.407061 6 log.go:172] (0xc000c8b550) (0xc000d672c0) Create stream
I0602 10:51:17.407086 6 log.go:172] (0xc000c8b550) (0xc000d672c0) Stream added, broadcasting: 1
I0602 10:51:17.408602 6 log.go:172] (0xc000c8b550) Reply frame received for 1
I0602 10:51:17.408633 6 log.go:172] (0xc000c8b550) (0xc0003fadc0) Create stream
I0602 10:51:17.408644 6 log.go:172] (0xc000c8b550) (0xc0003fadc0) Stream added, broadcasting: 3
I0602 10:51:17.409527 6 log.go:172] (0xc000c8b550) Reply frame received for 3
I0602 10:51:17.409554 6 log.go:172] (0xc000c8b550) (0xc0003faf00) Create stream
I0602 10:51:17.409563 6 log.go:172] (0xc000c8b550) (0xc0003faf00) Stream added, broadcasting: 5
I0602 10:51:17.410152 6 log.go:172] (0xc000c8b550) Reply frame received for 5
I0602 10:51:17.651912 6 log.go:172] (0xc000c8b550) Data frame received for 3
I0602 10:51:17.651943 6 log.go:172] (0xc0003fadc0) (3) Data frame handling
I0602 10:51:17.651963 6 log.go:172] (0xc0003fadc0) (3) Data frame sent
I0602 10:51:17.652726 6 log.go:172] (0xc000c8b550) Data frame received for 5
I0602 10:51:17.652751 6 log.go:172] (0xc0003faf00) (5) Data frame handling
I0602 10:51:17.652838 6 log.go:172] (0xc000c8b550) Data frame received for 3
I0602 10:51:17.652878 6 log.go:172] (0xc0003fadc0) (3) Data frame handling
I0602 10:51:17.654692 6 log.go:172] (0xc000c8b550) Data frame received for 1
I0602 10:51:17.654735 6 log.go:172] (0xc000d672c0) (1) Data frame handling
I0602 10:51:17.654779 6 log.go:172] (0xc000d672c0) (1) Data frame sent
I0602 10:51:17.654797 6 log.go:172] (0xc000c8b550) (0xc000d672c0) Stream removed, broadcasting: 1
I0602 10:51:17.654812 6 log.go:172] (0xc000c8b550) Go away received
I0602 10:51:17.655073 6 log.go:172] (0xc000c8b550) (0xc000d672c0) Stream removed, broadcasting: 1
I0602 10:51:17.655095 6 log.go:172] (0xc000c8b550) (0xc0003fadc0) Stream removed, broadcasting: 3
I0602 10:51:17.655120 6 log.go:172] (0xc000c8b550) (0xc0003faf00) Stream removed, broadcasting: 5
Jun 2 10:51:17.655: INFO: Waiting for endpoints: map[]
Jun 2 10:51:17.658: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.204:8080/dial?request=hostName&protocol=http&host=10.244.2.203&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-m56rm PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 2 10:51:17.658: INFO: >>> kubeConfig: /root/.kube/config
I0602 10:51:17.696546 6 log.go:172] (0xc000cb84d0) (0xc000ee61e0) Create stream
I0602 10:51:17.696588 6 log.go:172] (0xc000cb84d0) (0xc000ee61e0) Stream added, broadcasting: 1
I0602 10:51:17.699427 6 log.go:172] (0xc000cb84d0) Reply frame received for 1
I0602 10:51:17.699461 6 log.go:172] (0xc000cb84d0) (0xc00053abe0) Create stream
I0602 10:51:17.699471 6 log.go:172] (0xc000cb84d0) (0xc00053abe0) Stream added, broadcasting: 3
I0602 10:51:17.700441 6 log.go:172] (0xc000cb84d0) Reply frame received for 3
I0602 10:51:17.700477 6 log.go:172] (0xc000cb84d0) (0xc000ee6280) Create stream
I0602 10:51:17.700492 6 log.go:172] (0xc000cb84d0) (0xc000ee6280) Stream added, broadcasting: 5
I0602 10:51:17.702327 6 log.go:172] (0xc000cb84d0) Reply frame received for 5
I0602 10:51:17.761706 6 log.go:172] (0xc000cb84d0) Data frame received for 3
I0602 10:51:17.761765 6 log.go:172] (0xc00053abe0) (3) Data frame handling
I0602 10:51:17.761808 6 log.go:172] (0xc00053abe0) (3) Data frame sent
I0602 10:51:17.762463 6 log.go:172] (0xc000cb84d0) Data frame received for 5
I0602 10:51:17.762486 6 log.go:172] (0xc000ee6280) (5) Data frame handling
I0602 10:51:17.762510 6 log.go:172] (0xc000cb84d0) Data frame received for 3
I0602 10:51:17.762515 6 log.go:172] (0xc00053abe0) (3) Data frame handling
I0602 10:51:17.763686 6 log.go:172] (0xc000cb84d0) Data frame received for 1
I0602 10:51:17.763708 6 log.go:172] (0xc000ee61e0) (1) Data frame handling
I0602 10:51:17.763736 6 log.go:172] (0xc000ee61e0) (1) Data frame sent
I0602 10:51:17.763754 6 log.go:172] (0xc000cb84d0) (0xc000ee61e0) Stream removed, broadcasting: 1
I0602 10:51:17.763785 6 log.go:172] (0xc000cb84d0) Go away received
I0602 10:51:17.763853 6 log.go:172] (0xc000cb84d0) (0xc000ee61e0) Stream removed, broadcasting: 1
I0602 10:51:17.763872 6 log.go:172] (0xc000cb84d0) (0xc00053abe0) Stream removed, broadcasting: 3
I0602 10:51:17.763887 6 log.go:172] (0xc000cb84d0) (0xc000ee6280) Stream removed, broadcasting: 5
Jun 2 10:51:17.763: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 2 10:51:17.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-m56rm" for this suite.
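The ExecWithOptions entries above show how the framework drives this check: it execs into the hostexec container of host-test-container-pod and curls the webserver pod's /dial endpoint, which in turn contacts each target pod over HTTP. The same probe can be issued by hand with kubectl exec; the pod name, container name, and IPs below are the ones from this run and will differ in any other cluster.

# Manually repeat the connectivity probe the framework runs via ExecWithOptions.
# 10.244.2.204 is the test webserver pod, 10.244.1.126 one of the target pods
# (addresses taken from the log above; substitute your own).
kubectl exec -n e2e-tests-pod-network-test-m56rm host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.244.2.204:8080/dial?request=hostName&protocol=http&host=10.244.1.126&port=8080&tries=1'"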
Jun 2 10:51:39.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 2 10:51:39.835: INFO: namespace: e2e-tests-pod-network-test-m56rm, resource: bindings, ignored listing per whitelist
Jun 2 10:51:39.857: INFO: namespace e2e-tests-pod-network-test-m56rm deletion completed in 22.090403664s
• [SLOW TEST:42.702 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 2 10:51:39.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-09e2cb24-a4bf-11ea-889d-0242ac110018
STEP: Creating a pod to test consume configMaps
Jun 2 10:51:40.001: INFO: Waiting up to 5m0s for pod "pod-configmaps-09e4f34d-a4bf-11ea-889d-0242ac110018" in namespace "e2e-tests-configmap-f6hp4" to be "success or failure"
Jun 2 10:51:40.031: INFO: Pod "pod-configmaps-09e4f34d-a4bf-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 29.798512ms
Jun 2 10:51:42.035: INFO: Pod "pod-configmaps-09e4f34d-a4bf-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033608468s
Jun 2 10:51:44.039: INFO: Pod "pod-configmaps-09e4f34d-a4bf-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03760448s
STEP: Saw pod success
Jun 2 10:51:44.039: INFO: Pod "pod-configmaps-09e4f34d-a4bf-11ea-889d-0242ac110018" satisfied condition "success or failure"
Jun 2 10:51:44.041: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-09e4f34d-a4bf-11ea-889d-0242ac110018 container configmap-volume-test:
STEP: delete the pod
Jun 2 10:51:44.064: INFO: Waiting for pod pod-configmaps-09e4f34d-a4bf-11ea-889d-0242ac110018 to disappear
Jun 2 10:51:44.094: INFO: Pod pod-configmaps-09e4f34d-a4bf-11ea-889d-0242ac110018 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 2 10:51:44.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-f6hp4" for this suite.
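The "volume with mappings" case above mounts a ConfigMap and remaps one of its keys to a different file path via items. A rough shell sketch of the same setup follows; the ConfigMap name, key, value, image, and paths are illustrative assumptions, not the exact objects the framework creates.

# Create a ConfigMap and consume it through a volume with a key-to-path mapping.
kubectl create configmap configmap-test-volume-map --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29                       # assumed image; the e2e test uses its own
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-2                  # key data-1 surfaces under this remapped path
EOF
# Once the pod has succeeded, its log should contain the key's value:
kubectl logs pod-configmaps-example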
Jun 2 10:51:50.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 2 10:51:50.143: INFO: namespace: e2e-tests-configmap-f6hp4, resource: bindings, ignored listing per whitelist
Jun 2 10:51:50.188: INFO: namespace e2e-tests-configmap-f6hp4 deletion completed in 6.089992516s
• [SLOW TEST:10.330 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 2 10:51:50.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 2 10:51:50.314: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jun 2 10:51:50.344: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:51:50.347: INFO: Number of nodes with available pods: 0
Jun 2 10:51:50.347: INFO: Node hunter-worker is running more than one daemon pod
Jun 2 10:51:51.352: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:51:51.355: INFO: Number of nodes with available pods: 0
Jun 2 10:51:51.355: INFO: Node hunter-worker is running more than one daemon pod
Jun 2 10:51:52.467: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:51:52.470: INFO: Number of nodes with available pods: 0
Jun 2 10:51:52.471: INFO: Node hunter-worker is running more than one daemon pod
Jun 2 10:51:53.366: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:51:53.369: INFO: Number of nodes with available pods: 0
Jun 2 10:51:53.369: INFO: Node hunter-worker is running more than one daemon pod
Jun 2 10:51:54.352: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:51:54.355: INFO: Number of nodes with available pods: 0
Jun 2 10:51:54.355: INFO: Node hunter-worker is running more than one daemon pod
Jun 2 10:51:55.352: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:51:55.355: INFO: Number of nodes with available pods: 2
Jun 2 10:51:55.355: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jun 2 10:51:55.392: INFO: Wrong image for pod: daemon-set-h4c2f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 2 10:51:55.392: INFO: Wrong image for pod: daemon-set-hbd2j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 2 10:51:55.413: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:51:56.421: INFO: Wrong image for pod: daemon-set-h4c2f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 2 10:51:56.421: INFO: Wrong image for pod: daemon-set-hbd2j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 2 10:51:56.424: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:51:57.424: INFO: Wrong image for pod: daemon-set-h4c2f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 2 10:51:57.424: INFO: Wrong image for pod: daemon-set-hbd2j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 2 10:51:57.427: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:51:58.416: INFO: Wrong image for pod: daemon-set-h4c2f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 2 10:51:58.416: INFO: Wrong image for pod: daemon-set-hbd2j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 2 10:51:58.420: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:51:59.419: INFO: Wrong image for pod: daemon-set-h4c2f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 2 10:51:59.419: INFO: Wrong image for pod: daemon-set-hbd2j. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 2 10:51:59.419: INFO: Pod daemon-set-hbd2j is not available
Jun 2 10:51:59.422: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:52:00.418: INFO: Wrong image for pod: daemon-set-h4c2f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 2 10:52:00.418: INFO: Pod daemon-set-xg2h6 is not available
Jun 2 10:52:00.423: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:52:01.485: INFO: Wrong image for pod: daemon-set-h4c2f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 2 10:52:01.485: INFO: Pod daemon-set-xg2h6 is not available
Jun 2 10:52:01.489: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:52:02.417: INFO: Wrong image for pod: daemon-set-h4c2f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 2 10:52:02.417: INFO: Pod daemon-set-xg2h6 is not available
Jun 2 10:52:02.419: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:52:03.418: INFO: Wrong image for pod: daemon-set-h4c2f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 2 10:52:03.423: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:52:04.418: INFO: Wrong image for pod: daemon-set-h4c2f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 2 10:52:04.423: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:52:05.417: INFO: Wrong image for pod: daemon-set-h4c2f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jun 2 10:52:05.417: INFO: Pod daemon-set-h4c2f is not available
Jun 2 10:52:05.420: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:52:06.418: INFO: Pod daemon-set-rvq2f is not available
Jun 2 10:52:06.424: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Jun 2 10:52:06.427: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:52:06.430: INFO: Number of nodes with available pods: 1
Jun 2 10:52:06.430: INFO: Node hunter-worker is running more than one daemon pod
Jun 2 10:52:07.434: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:52:07.438: INFO: Number of nodes with available pods: 1
Jun 2 10:52:07.438: INFO: Node hunter-worker is running more than one daemon pod
Jun 2 10:52:08.435: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:52:08.438: INFO: Number of nodes with available pods: 1
Jun 2 10:52:08.438: INFO: Node hunter-worker is running more than one daemon pod
Jun 2 10:52:09.434: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 10:52:09.438: INFO: Number of nodes with available pods: 2
Jun 2 10:52:09.438: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-zkz2w, will wait for the garbage collector to delete the pods
Jun 2 10:52:09.512: INFO: Deleting DaemonSet.extensions daemon-set took: 6.572187ms
Jun 2 10:52:09.612: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.208973ms
Jun 2 10:52:13.715: INFO: Number of nodes with available pods: 0
Jun 2 10:52:13.715: INFO: Number of running nodes: 0, number of available pods: 0
Jun 2 10:52:13.718: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-zkz2w/daemonsets","resourceVersion":"13815423"},"items":null}
Jun 2 10:52:13.721: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-zkz2w/pods","resourceVersion":"13815423"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 2 10:52:13.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-zkz2w" for this suite.
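What the spec above does: it creates a DaemonSet running nginx:1.14-alpine, updates the pod template image to the redis test image, and watches the RollingUpdate strategy replace one pod per node until every pod reports the new image and becomes available again. Outside the framework, a comparable rollout can be driven and observed with kubectl; the container name used with set image is an assumption (the framework's own template is not shown in the log), and the DaemonSet name follows the log.

# Switch the DaemonSet's container image and let the default RollingUpdate
# strategy replace the pods node by node.
kubectl -n e2e-tests-daemonsets-zkz2w set image daemonset/daemon-set \
  app=gcr.io/kubernetes-e2e-test-images/redis:1.0      # "app" is an assumed container name
# Watch the rollout until every node runs an updated, available pod.
kubectl -n e2e-tests-daemonsets-zkz2w rollout status daemonset/daemon-set
# Verify the image actually running in each pod of the namespace.
kubectl -n e2e-tests-daemonsets-zkz2w get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.containers[0].image}{"\n"}{end}'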
Jun 2 10:52:19.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 2 10:52:19.760: INFO: namespace: e2e-tests-daemonsets-zkz2w, resource: bindings, ignored listing per whitelist
Jun 2 10:52:19.827: INFO: namespace e2e-tests-daemonsets-zkz2w deletion completed in 6.093538771s
• [SLOW TEST:29.639 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 2 10:52:19.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-21b54643-a4bf-11ea-889d-0242ac110018
STEP: Creating a pod to test consume secrets
Jun 2 10:52:19.965: INFO: Waiting up to 5m0s for pod "pod-secrets-21b72b7c-a4bf-11ea-889d-0242ac110018" in namespace "e2e-tests-secrets-pcxv2" to be "success or failure"
Jun 2 10:52:19.982: INFO: Pod "pod-secrets-21b72b7c-a4bf-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.772956ms
Jun 2 10:52:21.986: INFO: Pod "pod-secrets-21b72b7c-a4bf-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020856102s
Jun 2 10:52:23.991: INFO: Pod "pod-secrets-21b72b7c-a4bf-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025466242s
STEP: Saw pod success
Jun 2 10:52:23.991: INFO: Pod "pod-secrets-21b72b7c-a4bf-11ea-889d-0242ac110018" satisfied condition "success or failure"
Jun 2 10:52:23.994: INFO: Trying to get logs from node hunter-worker pod pod-secrets-21b72b7c-a4bf-11ea-889d-0242ac110018 container secret-volume-test:
STEP: delete the pod
Jun 2 10:52:24.156: INFO: Waiting for pod pod-secrets-21b72b7c-a4bf-11ea-889d-0242ac110018 to disappear
Jun 2 10:52:24.186: INFO: Pod pod-secrets-21b72b7c-a4bf-11ea-889d-0242ac110018 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 2 10:52:24.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-pcxv2" for this suite.
Jun 2 10:52:30.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 2 10:52:30.269: INFO: namespace: e2e-tests-secrets-pcxv2, resource: bindings, ignored listing per whitelist
Jun 2 10:52:30.286: INFO: namespace e2e-tests-secrets-pcxv2 deletion completed in 6.097222568s
• [SLOW TEST:10.459 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 2 10:52:30.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-jb2v8
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 2 10:52:30.400: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jun 2 10:52:54.516: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.130:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-jb2v8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 2 10:52:54.516: INFO: >>> kubeConfig: /root/.kube/config
I0602 10:52:54.550202 6 log.go:172] (0xc000cb84d0) (0xc00178e820) Create stream
I0602 10:52:54.550239 6 log.go:172] (0xc000cb84d0) (0xc00178e820) Stream added, broadcasting: 1
I0602 10:52:54.553498 6 log.go:172] (0xc000cb84d0) Reply frame received for 1
I0602 10:52:54.553559 6 log.go:172] (0xc000cb84d0) (0xc0010ea000) Create stream
I0602 10:52:54.553577 6 log.go:172] (0xc000cb84d0) (0xc0010ea000) Stream added, broadcasting: 3
I0602 10:52:54.554643 6 log.go:172] (0xc000cb84d0) Reply frame received for 3
I0602 10:52:54.554692 6 log.go:172] (0xc000cb84d0) (0xc00069e0a0) Create stream
I0602 10:52:54.554710 6 log.go:172] (0xc000cb84d0) (0xc00069e0a0) Stream added, broadcasting: 5
I0602 10:52:54.555683 6 log.go:172] (0xc000cb84d0) Reply frame received for 5
I0602 10:52:54.656907 6 log.go:172] (0xc000cb84d0) Data frame received for 3
I0602 10:52:54.656958 6 log.go:172] (0xc0010ea000) (3) Data frame handling
I0602 10:52:54.656994 6 log.go:172] (0xc0010ea000) (3) Data frame sent
I0602 10:52:54.657067 6 log.go:172] (0xc000cb84d0) Data frame received for 3
I0602 10:52:54.657331 6 log.go:172] (0xc0010ea000) (3) Data frame handling
I0602 10:52:54.657476 6 log.go:172] (0xc000cb84d0) Data frame received for 5
I0602 10:52:54.657503 6 log.go:172] (0xc00069e0a0) (5) Data frame handling
I0602 10:52:54.659483 6 log.go:172] (0xc000cb84d0) Data frame received for 1
I0602 10:52:54.659523 6 log.go:172] (0xc00178e820) (1) Data frame handling
I0602 10:52:54.659661 6 log.go:172] (0xc00178e820) (1) Data frame sent
I0602 10:52:54.659690 6 log.go:172] (0xc000cb84d0) (0xc00178e820) Stream removed, broadcasting: 1
I0602 10:52:54.659752 6 log.go:172] (0xc000cb84d0) Go away received
I0602 10:52:54.659844 6 log.go:172] (0xc000cb84d0) (0xc00178e820) Stream removed, broadcasting: 1
I0602 10:52:54.659879 6 log.go:172] (0xc000cb84d0) (0xc0010ea000) Stream removed, broadcasting: 3
I0602 10:52:54.659910 6 log.go:172] (0xc000cb84d0) (0xc00069e0a0) Stream removed, broadcasting: 5
Jun 2 10:52:54.659: INFO: Found all expected endpoints: [netserver-0]
Jun 2 10:52:54.664: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.208:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-jb2v8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 2 10:52:54.664: INFO: >>> kubeConfig: /root/.kube/config
I0602 10:52:54.703991 6 log.go:172] (0xc00087dad0) (0xc0010ea320) Create stream
I0602 10:52:54.704025 6 log.go:172] (0xc00087dad0) (0xc0010ea320) Stream added, broadcasting: 1
I0602 10:52:54.706849 6 log.go:172] (0xc00087dad0) Reply frame received for 1
I0602 10:52:54.706936 6 log.go:172] (0xc00087dad0) (0xc00069e320) Create stream
I0602 10:52:54.706961 6 log.go:172] (0xc00087dad0) (0xc00069e320) Stream added, broadcasting: 3
I0602 10:52:54.708227 6 log.go:172] (0xc00087dad0) Reply frame received for 3
I0602 10:52:54.708290 6 log.go:172] (0xc00087dad0) (0xc000768780) Create stream
I0602 10:52:54.708316 6 log.go:172] (0xc00087dad0) (0xc000768780) Stream added, broadcasting: 5
I0602 10:52:54.709537 6 log.go:172] (0xc00087dad0) Reply frame received for 5
I0602 10:52:54.786167 6 log.go:172] (0xc00087dad0) Data frame received for 3
I0602 10:52:54.786214 6 log.go:172] (0xc00069e320) (3) Data frame handling
I0602 10:52:54.786240 6 log.go:172] (0xc00069e320) (3) Data frame sent
I0602 10:52:54.786348 6 log.go:172] (0xc00087dad0) Data frame received for 3
I0602 10:52:54.786439 6 log.go:172] (0xc00069e320) (3) Data frame handling
I0602 10:52:54.786480 6 log.go:172] (0xc00087dad0) Data frame received for 5
I0602 10:52:54.786502 6 log.go:172] (0xc000768780) (5) Data frame handling
I0602 10:52:54.787948 6 log.go:172] (0xc00087dad0) Data frame received for 1
I0602 10:52:54.787964 6 log.go:172] (0xc0010ea320) (1) Data frame handling
I0602 10:52:54.787977 6 log.go:172] (0xc0010ea320) (1) Data frame sent
I0602 10:52:54.787992 6 log.go:172] (0xc00087dad0) (0xc0010ea320) Stream removed, broadcasting: 1
I0602 10:52:54.788006 6 log.go:172] (0xc00087dad0) Go away received
I0602 10:52:54.788157 6 log.go:172] (0xc00087dad0) (0xc0010ea320) Stream removed, broadcasting: 1
I0602 10:52:54.788205 6 log.go:172] (0xc00087dad0) (0xc00069e320) Stream removed, broadcasting: 3
I0602 10:52:54.788227 6 log.go:172] (0xc00087dad0) (0xc000768780) Stream removed, broadcasting: 5
Jun 2 10:52:54.788: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 2 10:52:54.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-jb2v8" for this suite.
Jun 2 10:53:16.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 2 10:53:16.882: INFO: namespace: e2e-tests-pod-network-test-jb2v8, resource: bindings, ignored listing per whitelist
Jun 2 10:53:16.891: INFO: namespace e2e-tests-pod-network-test-jb2v8 deletion completed in 22.098757986s
• [SLOW TEST:46.604 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs
  should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 2 10:53:16.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jun 2 10:53:16.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sqf8r'
Jun 2 10:53:19.233: INFO: stderr: ""
Jun 2 10:53:19.233: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jun 2 10:53:20.237: INFO: Selector matched 1 pods for map[app:redis]
Jun 2 10:53:20.237: INFO: Found 0 / 1
Jun 2 10:53:21.237: INFO: Selector matched 1 pods for map[app:redis]
Jun 2 10:53:21.237: INFO: Found 0 / 1
Jun 2 10:53:22.239: INFO: Selector matched 1 pods for map[app:redis]
Jun 2 10:53:22.239: INFO: Found 0 / 1
Jun 2 10:53:23.242: INFO: Selector matched 1 pods for map[app:redis]
Jun 2 10:53:23.242: INFO: Found 1 / 1
Jun 2 10:53:23.242: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jun 2 10:53:23.244: INFO: Selector matched 1 pods for map[app:redis]
Jun 2 10:53:23.244: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Jun 2 10:53:23.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qjl58 redis-master --namespace=e2e-tests-kubectl-sqf8r'
Jun 2 10:53:23.368: INFO: stderr: ""
Jun 2 10:53:23.368: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 02 Jun 10:53:21.995 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Jun 10:53:21.995 # Server started, Redis version 3.2.12\n1:M 02 Jun 10:53:21.995 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Jun 10:53:21.995 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jun 2 10:53:23.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qjl58 redis-master --namespace=e2e-tests-kubectl-sqf8r --tail=1'
Jun 2 10:53:23.477: INFO: stderr: ""
Jun 2 10:53:23.477: INFO: stdout: "1:M 02 Jun 10:53:21.995 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jun 2 10:53:23.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qjl58 redis-master --namespace=e2e-tests-kubectl-sqf8r --limit-bytes=1'
Jun 2 10:53:23.596: INFO: stderr: ""
Jun 2 10:53:23.596: INFO: stdout: " "
STEP: exposing timestamps
Jun 2 10:53:23.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qjl58 redis-master --namespace=e2e-tests-kubectl-sqf8r --tail=1 --timestamps'
Jun 2 10:53:23.724: INFO: stderr: ""
Jun 2 10:53:23.724: INFO: stdout: "2020-06-02T10:53:21.995824002Z 1:M 02 Jun 10:53:21.995 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jun 2 10:53:26.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qjl58 redis-master --namespace=e2e-tests-kubectl-sqf8r --since=1s'
Jun 2 10:53:26.346: INFO: stderr: ""
Jun 2 10:53:26.346: INFO: stdout: ""
Jun 2 10:53:26.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qjl58 redis-master --namespace=e2e-tests-kubectl-sqf8r --since=24h'
Jun 2 10:53:26.454: INFO: stderr: ""
Jun 2 10:53:26.454: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 02 Jun 10:53:21.995 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Jun 10:53:21.995 # Server started, Redis version 3.2.12\n1:M 02 Jun 10:53:21.995 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Jun 10:53:21.995 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jun 2 10:53:26.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sqf8r'
Jun 2 10:53:26.556: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 2 10:53:26.556: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jun 2 10:53:26.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-sqf8r'
Jun 2 10:53:26.662: INFO: stderr: "No resources found.\n"
Jun 2 10:53:26.662: INFO: stdout: ""
Jun 2 10:53:26.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-sqf8r -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 2 10:53:26.755: INFO: stderr: ""
Jun 2 10:53:26.755: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 2 10:53:26.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sqf8r" for this suite.
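The flags exercised above are the standard log-filtering options of kubectl; this release still uses the old `kubectl log` alias, which today is spelled `kubectl logs`. For reference, the same steps can be reproduced directly (pod, container, and namespace names are the ones from this run):

# Retrieve and filter container logs, mirroring the steps in this spec.
kubectl logs redis-master-qjl58 redis-master -n e2e-tests-kubectl-sqf8r                    # full log
kubectl logs redis-master-qjl58 redis-master -n e2e-tests-kubectl-sqf8r --tail=1           # last line only
kubectl logs redis-master-qjl58 redis-master -n e2e-tests-kubectl-sqf8r --limit-bytes=1    # first byte only
kubectl logs redis-master-qjl58 redis-master -n e2e-tests-kubectl-sqf8r --tail=1 --timestamps  # prefix RFC3339 timestamps
kubectl logs redis-master-qjl58 redis-master -n e2e-tests-kubectl-sqf8r --since=1s         # only entries from the last second
kubectl logs redis-master-qjl58 redis-master -n e2e-tests-kubectl-sqf8r --since=24h        # entries from the last 24 hours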
Jun 2 10:53:32.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 2 10:53:32.997: INFO: namespace: e2e-tests-kubectl-sqf8r, resource: bindings, ignored listing per whitelist
Jun 2 10:53:33.016: INFO: namespace e2e-tests-kubectl-sqf8r deletion completed in 6.256701108s
• [SLOW TEST:16.125 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 2 10:53:33.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-4d522119-a4bf-11ea-889d-0242ac110018
STEP: Creating a pod to test consume secrets
Jun 2 10:53:33.208: INFO: Waiting up to 5m0s for pod "pod-secrets-4d5f5636-a4bf-11ea-889d-0242ac110018" in namespace "e2e-tests-secrets-8rrw2" to be "success or failure"
Jun 2 10:53:33.222: INFO: Pod "pod-secrets-4d5f5636-a4bf-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.59824ms
Jun 2 10:53:35.271: INFO: Pod "pod-secrets-4d5f5636-a4bf-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062421954s
Jun 2 10:53:37.276: INFO: Pod "pod-secrets-4d5f5636-a4bf-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067265239s
STEP: Saw pod success
Jun 2 10:53:37.276: INFO: Pod "pod-secrets-4d5f5636-a4bf-11ea-889d-0242ac110018" satisfied condition "success or failure"
Jun 2 10:53:37.279: INFO: Trying to get logs from node hunter-worker pod pod-secrets-4d5f5636-a4bf-11ea-889d-0242ac110018 container secret-volume-test:
STEP: delete the pod
Jun 2 10:53:37.367: INFO: Waiting for pod pod-secrets-4d5f5636-a4bf-11ea-889d-0242ac110018 to disappear
Jun 2 10:53:37.369: INFO: Pod pod-secrets-4d5f5636-a4bf-11ea-889d-0242ac110018 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 2 10:53:37.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8rrw2" for this suite.
Jun 2 10:53:43.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 2 10:53:43.454: INFO: namespace: e2e-tests-secrets-8rrw2, resource: bindings, ignored listing per whitelist
Jun 2 10:53:43.490: INFO: namespace e2e-tests-secrets-8rrw2 deletion completed in 6.118668846s
STEP: Destroying namespace "e2e-tests-secret-namespace-snt4q" for this suite.
Jun 2 10:53:49.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 2 10:53:49.546: INFO: namespace: e2e-tests-secret-namespace-snt4q, resource: bindings, ignored listing per whitelist
Jun 2 10:53:49.596: INFO: namespace e2e-tests-secret-namespace-snt4q deletion completed in 6.10527753s
• [SLOW TEST:16.580 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 2 10:53:49.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jun 2 10:53:57.845: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 2 10:53:57.883: INFO: Pod pod-with-prestop-http-hook still exists
Jun 2 10:53:59.883: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 2 10:53:59.887: INFO: Pod pod-with-prestop-http-hook still exists
Jun 2 10:54:01.883: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 2 10:54:01.887: INFO: Pod pod-with-prestop-http-hook still exists
Jun 2 10:54:03.883: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 2 10:54:03.888: INFO: Pod pod-with-prestop-http-hook still exists
Jun 2 10:54:05.883: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 2 10:54:05.888: INFO: Pod pod-with-prestop-http-hook still exists
Jun 2 10:54:07.883: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 2 10:54:07.888: INFO: Pod pod-with-prestop-http-hook still exists
Jun 2 10:54:09.883: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 2 10:54:09.887: INFO: Pod pod-with-prestop-http-hook still exists
Jun 2 10:54:11.884: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 2 10:54:11.888: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 2 10:54:11.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-pvnnx" for this suite.
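The pod under test declares a preStop httpGet lifecycle hook pointing at the handler container created in BeforeEach; when the pod is deleted, the kubelet fires the hook before stopping the container, and the spec then checks that the handler received the request. A minimal sketch of such a pod follows; the image, handler address, port, and path are illustrative assumptions, as the log does not show the actual manifest.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1            # assumed image; any long-running container works
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop          # hypothetical endpoint on the hook-handler pod
          port: 8080
          host: 10.244.2.211               # illustrative handler pod IP
EOF
# Deleting the pod triggers the preStop hook before the container is stopped.
kubectl delete pod pod-with-prestop-http-hook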
Jun 2 10:54:33.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 2 10:54:33.963: INFO: namespace: e2e-tests-container-lifecycle-hook-pvnnx, resource: bindings, ignored listing per whitelist
Jun 2 10:54:33.986: INFO: namespace e2e-tests-container-lifecycle-hook-pvnnx deletion completed in 22.087956906s
• [SLOW TEST:44.390 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 2 10:54:33.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 2 10:54:34.076: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71a595b1-a4bf-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-xkfvg" to be "success or failure"
Jun 2 10:54:34.103: INFO: Pod "downwardapi-volume-71a595b1-a4bf-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 27.33011ms
Jun 2 10:54:36.108: INFO: Pod "downwardapi-volume-71a595b1-a4bf-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032050757s
Jun 2 10:54:38.115: INFO: Pod "downwardapi-volume-71a595b1-a4bf-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039416315s
STEP: Saw pod success
Jun 2 10:54:38.115: INFO: Pod "downwardapi-volume-71a595b1-a4bf-11ea-889d-0242ac110018" satisfied condition "success or failure"
Jun 2 10:54:38.118: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-71a595b1-a4bf-11ea-889d-0242ac110018 container client-container:
STEP: delete the pod
Jun 2 10:54:38.197: INFO: Waiting for pod downwardapi-volume-71a595b1-a4bf-11ea-889d-0242ac110018 to disappear
Jun 2 10:54:38.206: INFO: Pod downwardapi-volume-71a595b1-a4bf-11ea-889d-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 2 10:54:38.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xkfvg" for this suite.
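DefaultMode in this spec is the file permission applied to the files a projected downward API volume writes. A sketch of the general shape of pod it exercises follows; the image, mount path, field selection, and mode value are assumptions for illustration, since the log does not include the framework's manifest.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29                      # assumed image
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400                      # files should be created with mode r--------
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
# The container log then shows the file listing with the applied mode.
kubectl logs downwardapi-volume-example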
Jun 2 10:54:44.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 2 10:54:44.333: INFO: namespace: e2e-tests-projected-xkfvg, resource: bindings, ignored listing per whitelist
Jun 2 10:54:44.335: INFO: namespace e2e-tests-projected-xkfvg deletion completed in 6.125216288s
• [SLOW TEST:10.349 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 2 10:54:44.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jun 2 10:54:44.465: INFO: Waiting up to 5m0s for pod "client-containers-77d85629-a4bf-11ea-889d-0242ac110018" in namespace "e2e-tests-containers-7pd6v" to be "success or failure"
Jun 2 10:54:44.470: INFO: Pod "client-containers-77d85629-a4bf-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.04237ms
Jun 2 10:54:46.474: INFO: Pod "client-containers-77d85629-a4bf-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009439835s
Jun 2 10:54:48.478: INFO: Pod "client-containers-77d85629-a4bf-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013448723s
STEP: Saw pod success
Jun 2 10:54:48.478: INFO: Pod "client-containers-77d85629-a4bf-11ea-889d-0242ac110018" satisfied condition "success or failure"
Jun 2 10:54:48.481: INFO: Trying to get logs from node hunter-worker pod client-containers-77d85629-a4bf-11ea-889d-0242ac110018 container test-container:
STEP: delete the pod
Jun 2 10:54:48.500: INFO: Waiting for pod client-containers-77d85629-a4bf-11ea-889d-0242ac110018 to disappear
Jun 2 10:54:48.514: INFO: Pod client-containers-77d85629-a4bf-11ea-889d-0242ac110018 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 2 10:54:48.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-7pd6v" for this suite.
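Overriding the image's default command (its Docker ENTRYPOINT) is done with the container's command field; args would override the image's CMD instead. A minimal illustration, with an assumed image and command rather than the e2e test's own:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # "command" replaces the image ENTRYPOINT; "args" would replace its CMD.
    command: ["/bin/echo", "entrypoint overridden"]
EOF
# Once the pod completes, its log shows the overridden command's output.
kubectl logs client-containers-example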
Jun 2 10:54:54.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 10:54:54.573: INFO: namespace: e2e-tests-containers-7pd6v, resource: bindings, ignored listing per whitelist Jun 2 10:54:54.613: INFO: namespace e2e-tests-containers-7pd6v deletion completed in 6.096113636s • [SLOW TEST:10.277 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 10:54:54.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-7df38daf-a4bf-11ea-889d-0242ac110018 Jun 2 10:54:54.719: INFO: Pod name my-hostname-basic-7df38daf-a4bf-11ea-889d-0242ac110018: Found 0 pods out of 1 Jun 2 10:54:59.723: INFO: Pod name my-hostname-basic-7df38daf-a4bf-11ea-889d-0242ac110018: Found 1 pods out of 1 Jun 2 10:54:59.723: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-7df38daf-a4bf-11ea-889d-0242ac110018" are running Jun 2 10:54:59.725: INFO: Pod "my-hostname-basic-7df38daf-a4bf-11ea-889d-0242ac110018-d8wv4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-02 10:54:54 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-02 10:54:57 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-02 10:54:57 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-02 10:54:54 +0000 UTC Reason: Message:}]) Jun 2 10:54:59.726: INFO: Trying to dial the pod Jun 2 10:55:04.737: INFO: Controller my-hostname-basic-7df38daf-a4bf-11ea-889d-0242ac110018: Got expected result from replica 1 [my-hostname-basic-7df38daf-a4bf-11ea-889d-0242ac110018-d8wv4]: "my-hostname-basic-7df38daf-a4bf-11ea-889d-0242ac110018-d8wv4", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 10:55:04.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-qzlgd" for this suite. 
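The ReplicationController case above creates an RC, waits for its replicas to run, and dials each pod expecting it to answer with its own hostname. A rough hand-built equivalent is sketched below; the busybox httpd server and all names are stand-ins for whatever serve-hostname image the suite actually uses.

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: hostname-rc-demo
  spec:
    replicas: 1
    selector:
      app: hostname-rc-demo
    template:
      metadata:
        labels:
          app: hostname-rc-demo
      spec:
        containers:
        - name: serve-hostname
          image: busybox       # placeholder; any image that can answer HTTP with its hostname works
          command: ["sh", "-c", "mkdir -p /www && hostname > /www/index.html && httpd -f -p 8080 -h /www"]
          ports:
          - containerPort: 8080
  EOF
  # Check that the expected number of replicas reached Running and note their pod IPs,
  # which is what you would curl from inside the cluster to see each hostname:
  kubectl get pods -l app=hostname-rc-demo -o wide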
Jun 2 10:55:10.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 10:55:10.797: INFO: namespace: e2e-tests-replication-controller-qzlgd, resource: bindings, ignored listing per whitelist Jun 2 10:55:10.834: INFO: namespace e2e-tests-replication-controller-qzlgd deletion completed in 6.092586963s • [SLOW TEST:16.221 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 10:55:10.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-rfl48 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-rfl48 STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-rfl48 STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-rfl48 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-rfl48 Jun 2 10:55:16.990: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-rfl48, name: ss-0, uid: 8b277185-a4bf-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete. Jun 2 10:55:17.410: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-rfl48, name: ss-0, uid: 8b277185-a4bf-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. Jun 2 10:55:17.440: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-rfl48, name: ss-0, uid: 8b277185-a4bf-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. 
Jun 2 10:55:17.446: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-rfl48 STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-rfl48 STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-rfl48 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 2 10:55:23.525: INFO: Deleting all statefulset in ns e2e-tests-statefulset-rfl48 Jun 2 10:55:23.531: INFO: Scaling statefulset ss to 0 Jun 2 10:55:33.562: INFO: Waiting for statefulset status.replicas updated to 0 Jun 2 10:55:33.564: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 10:55:33.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-rfl48" for this suite. Jun 2 10:55:39.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 10:55:39.604: INFO: namespace: e2e-tests-statefulset-rfl48, resource: bindings, ignored listing per whitelist Jun 2 10:55:39.669: INFO: namespace e2e-tests-statefulset-rfl48 deletion completed in 6.092636817s • [SLOW TEST:28.834 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 10:55:39.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 2 10:55:39.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-99dbl' Jun 2 10:55:39.840: INFO: stderr: "" Jun 2 10:55:39.840: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jun 2 10:55:44.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-99dbl -o json' Jun 2 10:55:45.004: 
INFO: stderr: "" Jun 2 10:55:45.004: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-06-02T10:55:39Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-99dbl\",\n \"resourceVersion\": \"13816291\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-99dbl/pods/e2e-test-nginx-pod\",\n \"uid\": \"98d80e0b-a4bf-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-gpzc4\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-gpzc4\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-gpzc4\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-02T10:55:39Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-02T10:55:43Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-02T10:55:43Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-02T10:55:39Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://d2a3bf5a3b998357ccaa0557b1ce18e96e096d741460611c82bd1ae605946b52\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-06-02T10:55:42Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.138\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-06-02T10:55:39Z\"\n }\n}\n" STEP: replace the image in the pod Jun 2 10:55:45.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-99dbl' Jun 2 10:55:45.270: INFO: stderr: "" Jun 2 10:55:45.270: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Jun 2 10:55:45.273: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-99dbl' Jun 2 10:55:51.268: INFO: stderr: "" Jun 2 10:55:51.268: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 10:55:51.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-99dbl" for this suite. Jun 2 10:55:57.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 10:55:57.354: INFO: namespace: e2e-tests-kubectl-99dbl, resource: bindings, ignored listing per whitelist Jun 2 10:55:57.359: INFO: namespace e2e-tests-kubectl-99dbl deletion completed in 6.087786825s • [SLOW TEST:17.691 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 10:55:57.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 2 10:55:57.477: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jun 2 10:55:57.483: INFO: Number of nodes with available pods: 0 Jun 2 10:55:57.483: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jun 2 10:55:57.528: INFO: Number of nodes with available pods: 0 Jun 2 10:55:57.528: INFO: Node hunter-worker is running more than one daemon pod Jun 2 10:55:58.531: INFO: Number of nodes with available pods: 0 Jun 2 10:55:58.531: INFO: Node hunter-worker is running more than one daemon pod Jun 2 10:55:59.532: INFO: Number of nodes with available pods: 0 Jun 2 10:55:59.532: INFO: Node hunter-worker is running more than one daemon pod Jun 2 10:56:00.531: INFO: Number of nodes with available pods: 1 Jun 2 10:56:00.531: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jun 2 10:56:00.558: INFO: Number of nodes with available pods: 1 Jun 2 10:56:00.558: INFO: Number of running nodes: 0, number of available pods: 1 Jun 2 10:56:01.562: INFO: Number of nodes with available pods: 0 Jun 2 10:56:01.562: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jun 2 10:56:01.572: INFO: Number of nodes with available pods: 0 Jun 2 10:56:01.572: INFO: Node hunter-worker is running more than one daemon pod Jun 2 10:56:02.577: INFO: Number of nodes with available pods: 0 Jun 2 10:56:02.577: INFO: Node hunter-worker is running more than one daemon pod Jun 2 10:56:03.578: INFO: Number of nodes with available pods: 0 Jun 2 10:56:03.578: INFO: Node hunter-worker is running more than one daemon pod Jun 2 10:56:04.576: INFO: Number of nodes with available pods: 0 Jun 2 10:56:04.576: INFO: Node hunter-worker is running more than one daemon pod Jun 2 10:56:05.577: INFO: Number of nodes with available pods: 0 Jun 2 10:56:05.577: INFO: Node hunter-worker is running more than one daemon pod Jun 2 10:56:06.576: INFO: Number of nodes with available pods: 0 Jun 2 10:56:06.576: INFO: Node hunter-worker is running more than one daemon pod Jun 2 10:56:07.577: INFO: Number of nodes with available pods: 0 Jun 2 10:56:07.577: INFO: Node hunter-worker is running more than one daemon pod Jun 2 10:56:08.576: INFO: Number of nodes with available pods: 0 Jun 2 10:56:08.576: INFO: Node hunter-worker is running more than one daemon pod Jun 2 10:56:09.577: INFO: Number of nodes with available pods: 0 Jun 2 10:56:09.577: INFO: Node hunter-worker is running more than one daemon pod Jun 2 10:56:10.576: INFO: Number of nodes with available pods: 0 Jun 2 10:56:10.576: INFO: Node hunter-worker is running more than one daemon pod Jun 2 10:56:11.578: INFO: Number of nodes with available pods: 0 Jun 2 10:56:11.578: INFO: Node hunter-worker is running more than one daemon pod Jun 2 10:56:12.577: INFO: Number of nodes with available pods: 0 Jun 2 10:56:12.577: INFO: Node hunter-worker is running more than one daemon pod Jun 2 10:56:13.576: INFO: Number of nodes with available pods: 0 Jun 2 10:56:13.576: INFO: Node hunter-worker is running more than one daemon pod Jun 2 10:56:14.576: INFO: Number of nodes with available pods: 1 Jun 2 10:56:14.576: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-h2vgx, will wait for the garbage collector to delete the pods Jun 2 10:56:14.641: INFO: Deleting DaemonSet.extensions daemon-set took: 6.746961ms Jun 2 10:56:14.742: INFO: Terminating 
DaemonSet.extensions daemon-set pods took: 100.434642ms Jun 2 10:56:21.368: INFO: Number of nodes with available pods: 0 Jun 2 10:56:21.368: INFO: Number of running nodes: 0, number of available pods: 0 Jun 2 10:56:21.371: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-h2vgx/daemonsets","resourceVersion":"13816440"},"items":null} Jun 2 10:56:21.373: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-h2vgx/pods","resourceVersion":"13816440"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 10:56:21.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-h2vgx" for this suite. Jun 2 10:56:27.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 10:56:27.454: INFO: namespace: e2e-tests-daemonsets-h2vgx, resource: bindings, ignored listing per whitelist Jun 2 10:56:27.496: INFO: namespace e2e-tests-daemonsets-h2vgx deletion completed in 6.080710933s • [SLOW TEST:30.136 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 10:56:27.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 2 10:56:27.604: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 10:56:31.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-xch75" for this suite. 
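The websocket exec case above drives the pod "exec" subresource of the API server. A manual approximation is sketched below; the websocket URL in the comment shows the general shape of that endpoint (server address, namespace, and pod name are placeholders), and kubectl exec is used as the everyday client for the same subresource rather than a raw websocket.

  # Shape of the exec subresource the test reaches over a websocket (illustrative only):
  #   wss://<apiserver>/api/v1/namespaces/default/pods/exec-demo/exec?command=echo&command=hello&stdout=true&stderr=true
  # (negotiated with the v4.channel.k8s.io subprotocol; each frame carries a one-byte channel prefix)
  kubectl run exec-demo --image=busybox --restart=Never -- sleep 3600
  kubectl wait --for=condition=Ready pod/exec-demo
  kubectl exec exec-demo -- echo "hello"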
Jun 2 10:57:13.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 10:57:13.917: INFO: namespace: e2e-tests-pods-xch75, resource: bindings, ignored listing per whitelist Jun 2 10:57:13.942: INFO: namespace e2e-tests-pods-xch75 deletion completed in 42.124348272s • [SLOW TEST:46.446 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 10:57:13.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 2 10:57:14.055: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 10:57:18.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-2s2gp" for this suite. 
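The log-retrieval case above reads the pod "log" subresource over a websocket. A plain-HTTP way to hit the same endpoint by hand is sketched below, assuming the default namespace; pod name and port are placeholders and this does not use a websocket client.

  kubectl run log-demo --image=busybox --restart=Never -- sh -c 'echo hello from the container'
  sleep 5                                   # crude wait for the container to run once
  kubectl proxy --port=8001 &
  PROXY_PID=$!
  sleep 2                                   # give the proxy a moment to come up
  curl 'http://127.0.0.1:8001/api/v1/namespaces/default/pods/log-demo/log'
  kill "$PROXY_PID"                         # stop the background kubectl proxy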
Jun 2 10:58:04.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 10:58:04.226: INFO: namespace: e2e-tests-pods-2s2gp, resource: bindings, ignored listing per whitelist Jun 2 10:58:04.238: INFO: namespace e2e-tests-pods-2s2gp deletion completed in 46.150880147s • [SLOW TEST:50.296 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 10:58:04.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 2 10:58:04.377: INFO: Waiting up to 5m0s for pod "pod-ef0082cd-a4bf-11ea-889d-0242ac110018" in namespace "e2e-tests-emptydir-z7b6z" to be "success or failure" Jun 2 10:58:04.383: INFO: Pod "pod-ef0082cd-a4bf-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.825805ms Jun 2 10:58:06.388: INFO: Pod "pod-ef0082cd-a4bf-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010360716s Jun 2 10:58:08.430: INFO: Pod "pod-ef0082cd-a4bf-11ea-889d-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.052991386s Jun 2 10:58:10.434: INFO: Pod "pod-ef0082cd-a4bf-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056448916s STEP: Saw pod success Jun 2 10:58:10.434: INFO: Pod "pod-ef0082cd-a4bf-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 10:58:10.436: INFO: Trying to get logs from node hunter-worker2 pod pod-ef0082cd-a4bf-11ea-889d-0242ac110018 container test-container: STEP: delete the pod Jun 2 10:58:10.482: INFO: Waiting for pod pod-ef0082cd-a4bf-11ea-889d-0242ac110018 to disappear Jun 2 10:58:10.486: INFO: Pod pod-ef0082cd-a4bf-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 10:58:10.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-z7b6z" for this suite. 
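The (non-root,0644,tmpfs) emptyDir case above combines three things: a pod-level non-root user, a memory-backed emptyDir, and a file created with mode 0644. A minimal sketch with placeholder names, image, and UID:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000          # run as a non-root UID (placeholder value)
    containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo data > /cache/file && chmod 0644 /cache/file && ls -l /cache && mount | grep /cache"]
      volumeMounts:
      - name: cache
        mountPath: /cache
    volumes:
    - name: cache
      emptyDir:
        medium: Memory         # tmpfs-backed emptyDir, as in the tmpfs variant of the test
  EOF
  # Once completed, the log shows the 0644 file and the tmpfs mount backing /cache:
  kubectl logs emptydir-tmpfs-demo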
Jun 2 10:58:16.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 10:58:16.577: INFO: namespace: e2e-tests-emptydir-z7b6z, resource: bindings, ignored listing per whitelist Jun 2 10:58:16.584: INFO: namespace e2e-tests-emptydir-z7b6z deletion completed in 6.093773752s • [SLOW TEST:12.345 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 10:58:16.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 2 10:58:16.750: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f65a2595-a4bf-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-27rz9" to be "success or failure" Jun 2 10:58:16.780: INFO: Pod "downwardapi-volume-f65a2595-a4bf-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 29.762353ms Jun 2 10:58:18.784: INFO: Pod "downwardapi-volume-f65a2595-a4bf-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033988602s Jun 2 10:58:20.788: INFO: Pod "downwardapi-volume-f65a2595-a4bf-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038203283s STEP: Saw pod success Jun 2 10:58:20.788: INFO: Pod "downwardapi-volume-f65a2595-a4bf-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 10:58:20.791: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-f65a2595-a4bf-11ea-889d-0242ac110018 container client-container: STEP: delete the pod Jun 2 10:58:20.916: INFO: Waiting for pod downwardapi-volume-f65a2595-a4bf-11ea-889d-0242ac110018 to disappear Jun 2 10:58:20.929: INFO: Pod downwardapi-volume-f65a2595-a4bf-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 10:58:20.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-27rz9" for this suite. 
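The "container's cpu limit" case above exposes resources.limits.cpu through a downward API file. A sketch of a comparable pod follows; the limit value, divisor, and all names are placeholders chosen for illustration.

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-cpu-limit-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      resources:
        limits:
          cpu: 500m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.cpu
                divisor: 1m      # report the limit in millicores; 500m -> "500"
  EOF
  kubectl logs downwardapi-cpu-limit-demo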
Jun 2 10:58:26.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 10:58:26.953: INFO: namespace: e2e-tests-projected-27rz9, resource: bindings, ignored listing per whitelist Jun 2 10:58:27.042: INFO: namespace e2e-tests-projected-27rz9 deletion completed in 6.10929236s • [SLOW TEST:10.459 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 10:58:27.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-rcv2w/configmap-test-fc98edc5-a4bf-11ea-889d-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 2 10:58:27.194: INFO: Waiting up to 5m0s for pod "pod-configmaps-fc996d8c-a4bf-11ea-889d-0242ac110018" in namespace "e2e-tests-configmap-rcv2w" to be "success or failure" Jun 2 10:58:27.198: INFO: Pod "pod-configmaps-fc996d8c-a4bf-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.772392ms Jun 2 10:58:29.202: INFO: Pod "pod-configmaps-fc996d8c-a4bf-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007720501s Jun 2 10:58:31.206: INFO: Pod "pod-configmaps-fc996d8c-a4bf-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01146233s STEP: Saw pod success Jun 2 10:58:31.206: INFO: Pod "pod-configmaps-fc996d8c-a4bf-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 10:58:31.209: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-fc996d8c-a4bf-11ea-889d-0242ac110018 container env-test: STEP: delete the pod Jun 2 10:58:31.251: INFO: Waiting for pod pod-configmaps-fc996d8c-a4bf-11ea-889d-0242ac110018 to disappear Jun 2 10:58:31.258: INFO: Pod pod-configmaps-fc996d8c-a4bf-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 10:58:31.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rcv2w" for this suite. 
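The ConfigMap-via-environment case above injects ConfigMap data into container environment variables. A minimal sketch with placeholder ConfigMap, key, and pod names:

  kubectl create configmap env-demo-config --from-literal=DEMO_GREETING=hello
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-env-demo
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      command: ["sh", "-c", "env | grep DEMO_"]
      env:
      - name: DEMO_GREETING          # single key pulled from the ConfigMap
        valueFrom:
          configMapKeyRef:
            name: env-demo-config
            key: DEMO_GREETING
      envFrom:                        # alternatively, import every key wholesale
      - configMapRef:
          name: env-demo-config
  EOF
  # After completion the log contains the injected variable:
  kubectl logs configmap-env-demo     # expected: DEMO_GREETING=hello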
Jun 2 10:58:37.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 10:58:37.303: INFO: namespace: e2e-tests-configmap-rcv2w, resource: bindings, ignored listing per whitelist Jun 2 10:58:37.374: INFO: namespace e2e-tests-configmap-rcv2w deletion completed in 6.112945753s • [SLOW TEST:10.332 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 10:58:37.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-02b83736-a4c0-11ea-889d-0242ac110018 STEP: Creating a pod to test consume secrets Jun 2 10:58:37.470: INFO: Waiting up to 5m0s for pod "pod-secrets-02b9ba49-a4c0-11ea-889d-0242ac110018" in namespace "e2e-tests-secrets-vv5rb" to be "success or failure" Jun 2 10:58:37.474: INFO: Pod "pod-secrets-02b9ba49-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.697428ms Jun 2 10:58:39.724: INFO: Pod "pod-secrets-02b9ba49-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.253395396s Jun 2 10:58:41.727: INFO: Pod "pod-secrets-02b9ba49-a4c0-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.256606015s STEP: Saw pod success Jun 2 10:58:41.727: INFO: Pod "pod-secrets-02b9ba49-a4c0-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 10:58:41.729: INFO: Trying to get logs from node hunter-worker pod pod-secrets-02b9ba49-a4c0-11ea-889d-0242ac110018 container secret-volume-test: STEP: delete the pod Jun 2 10:58:41.805: INFO: Waiting for pod pod-secrets-02b9ba49-a4c0-11ea-889d-0242ac110018 to disappear Jun 2 10:58:41.816: INFO: Pod pod-secrets-02b9ba49-a4c0-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 10:58:41.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-vv5rb" for this suite. 
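The "multiple volumes in a pod" Secret case above mounts the same Secret at more than one path in a single pod. A sketch with placeholder names and values:

  kubectl create secret generic multi-mount-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-multi-volume-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
      volumeMounts:
      - name: secret-volume-1
        mountPath: /etc/secret-volume-1
        readOnly: true
      - name: secret-volume-2
        mountPath: /etc/secret-volume-2
        readOnly: true
    volumes:
    - name: secret-volume-1
      secret:
        secretName: multi-mount-secret
    - name: secret-volume-2
      secret:
        secretName: multi-mount-secret
  EOF
  # Both mounts expose the same key; the literal has no trailing newline, so the log is value-1value-1:
  kubectl logs secret-multi-volume-demo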
Jun 2 10:58:47.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 10:58:47.960: INFO: namespace: e2e-tests-secrets-vv5rb, resource: bindings, ignored listing per whitelist Jun 2 10:58:47.968: INFO: namespace e2e-tests-secrets-vv5rb deletion completed in 6.148413304s • [SLOW TEST:10.594 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 10:58:47.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-090f190c-a4c0-11ea-889d-0242ac110018 STEP: Creating a pod to test consume secrets Jun 2 10:58:48.125: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-09118f44-a4c0-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-hnhd6" to be "success or failure" Jun 2 10:58:48.148: INFO: Pod "pod-projected-secrets-09118f44-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.590578ms Jun 2 10:58:50.151: INFO: Pod "pod-projected-secrets-09118f44-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026126474s Jun 2 10:58:52.156: INFO: Pod "pod-projected-secrets-09118f44-a4c0-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03118302s STEP: Saw pod success Jun 2 10:58:52.156: INFO: Pod "pod-projected-secrets-09118f44-a4c0-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 10:58:52.159: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-09118f44-a4c0-11ea-889d-0242ac110018 container projected-secret-volume-test: STEP: delete the pod Jun 2 10:58:52.253: INFO: Waiting for pod pod-projected-secrets-09118f44-a4c0-11ea-889d-0242ac110018 to disappear Jun 2 10:58:52.259: INFO: Pod pod-projected-secrets-09118f44-a4c0-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 10:58:52.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hnhd6" for this suite. 
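The projected-secret case above layers three settings: a non-root user, a defaultMode on the projected files, and an fsGroup so that group ownership makes the files readable by that user. A sketch with placeholder UID, GID, mode, and names:

  kubectl create secret generic projected-mode-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-nonroot-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000          # non-root UID (placeholder)
      fsGroup: 2000            # group applied to the projected files (placeholder)
    containers:
    - name: projected-secret-volume-test
      image: busybox
      command: ["sh", "-c", "id && ls -ln /etc/projected-secret && cat /etc/projected-secret/data-1"]
      volumeMounts:
      - name: projected-secret
        mountPath: /etc/projected-secret
        readOnly: true
    volumes:
    - name: projected-secret
      projected:
        defaultMode: 0440      # octal; owner- and group-readable only
        sources:
        - secret:
            name: projected-mode-secret
  EOF
  # The log shows the effective uid/gid, the 0440 group-owned files, and the secret value:
  kubectl logs projected-secret-nonroot-demo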
Jun 2 10:58:58.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 10:58:58.298: INFO: namespace: e2e-tests-projected-hnhd6, resource: bindings, ignored listing per whitelist Jun 2 10:58:58.344: INFO: namespace e2e-tests-projected-hnhd6 deletion completed in 6.08100596s • [SLOW TEST:10.375 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 10:58:58.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-0f3f38e0-a4c0-11ea-889d-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 2 10:58:58.514: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0f43a51a-a4c0-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-6wxz5" to be "success or failure" Jun 2 10:58:58.535: INFO: Pod "pod-projected-configmaps-0f43a51a-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 20.664851ms Jun 2 10:59:00.589: INFO: Pod "pod-projected-configmaps-0f43a51a-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075028444s Jun 2 10:59:02.593: INFO: Pod "pod-projected-configmaps-0f43a51a-a4c0-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079540261s STEP: Saw pod success Jun 2 10:59:02.594: INFO: Pod "pod-projected-configmaps-0f43a51a-a4c0-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 10:59:02.596: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-0f43a51a-a4c0-11ea-889d-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jun 2 10:59:02.615: INFO: Waiting for pod pod-projected-configmaps-0f43a51a-a4c0-11ea-889d-0242ac110018 to disappear Jun 2 10:59:02.666: INFO: Pod pod-projected-configmaps-0f43a51a-a4c0-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 10:59:02.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6wxz5" for this suite. 
Jun 2 10:59:08.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 10:59:08.735: INFO: namespace: e2e-tests-projected-6wxz5, resource: bindings, ignored listing per whitelist Jun 2 10:59:08.759: INFO: namespace e2e-tests-projected-6wxz5 deletion completed in 6.089590413s • [SLOW TEST:10.415 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 10:59:08.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-15783dfe-a4c0-11ea-889d-0242ac110018 STEP: Creating a pod to test consume secrets Jun 2 10:59:08.939: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-157ae52e-a4c0-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-6pjzj" to be "success or failure" Jun 2 10:59:08.942: INFO: Pod "pod-projected-secrets-157ae52e-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.06284ms Jun 2 10:59:10.946: INFO: Pod "pod-projected-secrets-157ae52e-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007152905s Jun 2 10:59:12.950: INFO: Pod "pod-projected-secrets-157ae52e-a4c0-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011075031s STEP: Saw pod success Jun 2 10:59:12.950: INFO: Pod "pod-projected-secrets-157ae52e-a4c0-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 10:59:12.952: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-157ae52e-a4c0-11ea-889d-0242ac110018 container projected-secret-volume-test: STEP: delete the pod Jun 2 10:59:12.988: INFO: Waiting for pod pod-projected-secrets-157ae52e-a4c0-11ea-889d-0242ac110018 to disappear Jun 2 10:59:13.003: INFO: Pod pod-projected-secrets-157ae52e-a4c0-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 10:59:13.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6pjzj" for this suite. 
Jun 2 10:59:19.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 10:59:19.133: INFO: namespace: e2e-tests-projected-6pjzj, resource: bindings, ignored listing per whitelist Jun 2 10:59:19.139: INFO: namespace e2e-tests-projected-6pjzj deletion completed in 6.132076332s • [SLOW TEST:10.379 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 10:59:19.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-gtdwz I0602 10:59:19.224061 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-gtdwz, replica count: 1 I0602 10:59:20.274494 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0602 10:59:21.274704 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0602 10:59:22.274915 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0602 10:59:23.275174 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 2 10:59:23.450: INFO: Created: latency-svc-6nqdr Jun 2 10:59:23.459: INFO: Got endpoints: latency-svc-6nqdr [84.547592ms] Jun 2 10:59:23.486: INFO: Created: latency-svc-9x7r7 Jun 2 10:59:23.509: INFO: Got endpoints: latency-svc-9x7r7 [49.61564ms] Jun 2 10:59:23.540: INFO: Created: latency-svc-zs2kh Jun 2 10:59:23.610: INFO: Got endpoints: latency-svc-zs2kh [150.719617ms] Jun 2 10:59:23.648: INFO: Created: latency-svc-fvz4b Jun 2 10:59:23.671: INFO: Got endpoints: latency-svc-fvz4b [211.600601ms] Jun 2 10:59:23.702: INFO: Created: latency-svc-thf7d Jun 2 10:59:23.792: INFO: Got endpoints: latency-svc-thf7d [331.881341ms] Jun 2 10:59:23.833: INFO: Created: latency-svc-5vljf Jun 2 10:59:23.855: INFO: Got endpoints: latency-svc-5vljf [395.339128ms] Jun 2 10:59:23.910: INFO: Created: latency-svc-6mhpn Jun 2 10:59:23.914: INFO: Got endpoints: latency-svc-6mhpn [454.437789ms] Jun 2 10:59:24.463: INFO: Created: latency-svc-lbxf2 Jun 2 10:59:24.479: INFO: Got endpoints: latency-svc-lbxf2 [1.019032126s] Jun 2 10:59:24.499: INFO: Created: latency-svc-sjlll Jun 2 10:59:24.509: INFO: Got endpoints: latency-svc-sjlll [1.049331978s] Jun 2 10:59:25.108: INFO: 
Created: latency-svc-dst9q Jun 2 10:59:25.111: INFO: Got endpoints: latency-svc-dst9q [1.651133131s] Jun 2 10:59:25.662: INFO: Created: latency-svc-zrbpc Jun 2 10:59:25.677: INFO: Got endpoints: latency-svc-zrbpc [2.217650362s] Jun 2 10:59:26.196: INFO: Created: latency-svc-zdcvz Jun 2 10:59:26.211: INFO: Got endpoints: latency-svc-zdcvz [2.751070645s] Jun 2 10:59:26.720: INFO: Created: latency-svc-8zsqf Jun 2 10:59:26.726: INFO: Got endpoints: latency-svc-8zsqf [3.266298804s] Jun 2 10:59:26.803: INFO: Created: latency-svc-6n5ng Jun 2 10:59:26.805: INFO: Got endpoints: latency-svc-6n5ng [3.345566977s] Jun 2 10:59:27.380: INFO: Created: latency-svc-jw9vf Jun 2 10:59:27.391: INFO: Got endpoints: latency-svc-jw9vf [3.931392503s] Jun 2 10:59:27.916: INFO: Created: latency-svc-mdsrw Jun 2 10:59:27.987: INFO: Got endpoints: latency-svc-mdsrw [4.52770538s] Jun 2 10:59:28.448: INFO: Created: latency-svc-fjccr Jun 2 10:59:28.458: INFO: Got endpoints: latency-svc-fjccr [4.949260902s] Jun 2 10:59:29.414: INFO: Created: latency-svc-w54t4 Jun 2 10:59:29.422: INFO: Got endpoints: latency-svc-w54t4 [5.811930581s] Jun 2 10:59:29.891: INFO: Created: latency-svc-szp5m Jun 2 10:59:29.902: INFO: Got endpoints: latency-svc-szp5m [6.230638839s] Jun 2 10:59:30.432: INFO: Created: latency-svc-2pgc4 Jun 2 10:59:30.448: INFO: Got endpoints: latency-svc-2pgc4 [6.656183759s] Jun 2 10:59:31.007: INFO: Created: latency-svc-c2kzf Jun 2 10:59:31.010: INFO: Got endpoints: latency-svc-c2kzf [7.154757945s] Jun 2 10:59:31.504: INFO: Created: latency-svc-gr6t8 Jun 2 10:59:31.514: INFO: Got endpoints: latency-svc-gr6t8 [7.599694342s] Jun 2 10:59:31.544: INFO: Created: latency-svc-6thvd Jun 2 10:59:31.557: INFO: Got endpoints: latency-svc-6thvd [7.077980449s] Jun 2 10:59:32.079: INFO: Created: latency-svc-lqkkb Jun 2 10:59:32.090: INFO: Got endpoints: latency-svc-lqkkb [7.580794812s] Jun 2 10:59:32.108: INFO: Created: latency-svc-trwkk Jun 2 10:59:32.120: INFO: Got endpoints: latency-svc-trwkk [7.008989337s] Jun 2 10:59:32.707: INFO: Created: latency-svc-h6hb9 Jun 2 10:59:32.711: INFO: Got endpoints: latency-svc-h6hb9 [7.033874688s] Jun 2 10:59:32.738: INFO: Created: latency-svc-kj9ds Jun 2 10:59:32.750: INFO: Got endpoints: latency-svc-kj9ds [6.538977651s] Jun 2 10:59:32.768: INFO: Created: latency-svc-q4rr2 Jun 2 10:59:32.790: INFO: Got endpoints: latency-svc-q4rr2 [6.064305192s] Jun 2 10:59:32.839: INFO: Created: latency-svc-qmtcr Jun 2 10:59:32.869: INFO: Got endpoints: latency-svc-qmtcr [6.063190341s] Jun 2 10:59:33.399: INFO: Created: latency-svc-kckzt Jun 2 10:59:33.421: INFO: Got endpoints: latency-svc-kckzt [6.029936173s] Jun 2 10:59:33.463: INFO: Created: latency-svc-xl9dg Jun 2 10:59:33.534: INFO: Created: latency-svc-9tx6k Jun 2 10:59:33.536: INFO: Got endpoints: latency-svc-xl9dg [5.548180397s] Jun 2 10:59:33.541: INFO: Got endpoints: latency-svc-9tx6k [5.082871509s] Jun 2 10:59:33.559: INFO: Created: latency-svc-mb6p8 Jun 2 10:59:33.578: INFO: Got endpoints: latency-svc-mb6p8 [4.155239575s] Jun 2 10:59:34.079: INFO: Created: latency-svc-xwtfm Jun 2 10:59:34.093: INFO: Got endpoints: latency-svc-xwtfm [4.19115145s] Jun 2 10:59:34.111: INFO: Created: latency-svc-crtdb Jun 2 10:59:34.134: INFO: Got endpoints: latency-svc-crtdb [3.686552057s] Jun 2 10:59:34.186: INFO: Created: latency-svc-zwws7 Jun 2 10:59:34.190: INFO: Got endpoints: latency-svc-zwws7 [3.179685602s] Jun 2 10:59:34.217: INFO: Created: latency-svc-ds58b Jun 2 10:59:34.232: INFO: Got endpoints: latency-svc-ds58b [2.717787473s] Jun 2 10:59:34.740: INFO: 
Created: latency-svc-2dd42 Jun 2 10:59:34.747: INFO: Got endpoints: latency-svc-2dd42 [3.190215172s] Jun 2 10:59:34.774: INFO: Created: latency-svc-b8bvj Jun 2 10:59:34.790: INFO: Got endpoints: latency-svc-b8bvj [2.699644987s] Jun 2 10:59:35.273: INFO: Created: latency-svc-mwlnx Jun 2 10:59:35.286: INFO: Got endpoints: latency-svc-mwlnx [3.166461436s] Jun 2 10:59:35.794: INFO: Created: latency-svc-wl5b7 Jun 2 10:59:35.803: INFO: Got endpoints: latency-svc-wl5b7 [3.091296441s] Jun 2 10:59:36.310: INFO: Created: latency-svc-mktw7 Jun 2 10:59:36.323: INFO: Got endpoints: latency-svc-mktw7 [3.573437815s] Jun 2 10:59:36.769: INFO: Created: latency-svc-nlsjm Jun 2 10:59:36.779: INFO: Got endpoints: latency-svc-nlsjm [3.98811254s] Jun 2 10:59:37.285: INFO: Created: latency-svc-q5k6h Jun 2 10:59:37.295: INFO: Got endpoints: latency-svc-q5k6h [4.426168439s] Jun 2 10:59:37.758: INFO: Created: latency-svc-mfdkz Jun 2 10:59:37.774: INFO: Got endpoints: latency-svc-mfdkz [4.352932056s] Jun 2 10:59:38.232: INFO: Created: latency-svc-fpqhh Jun 2 10:59:38.247: INFO: Got endpoints: latency-svc-fpqhh [4.71140223s] Jun 2 10:59:38.753: INFO: Created: latency-svc-zlv2g Jun 2 10:59:38.757: INFO: Got endpoints: latency-svc-zlv2g [5.215614474s] Jun 2 10:59:38.783: INFO: Created: latency-svc-p7bzk Jun 2 10:59:38.793: INFO: Got endpoints: latency-svc-p7bzk [5.21512848s] Jun 2 10:59:39.321: INFO: Created: latency-svc-j65bz Jun 2 10:59:39.332: INFO: Got endpoints: latency-svc-j65bz [5.239133663s] Jun 2 10:59:39.833: INFO: Created: latency-svc-7vq9j Jun 2 10:59:39.837: INFO: Got endpoints: latency-svc-7vq9j [5.702337777s] Jun 2 10:59:40.420: INFO: Created: latency-svc-rhbg6 Jun 2 10:59:40.429: INFO: Got endpoints: latency-svc-rhbg6 [6.239058462s] Jun 2 10:59:40.957: INFO: Created: latency-svc-df552 Jun 2 10:59:40.999: INFO: Got endpoints: latency-svc-df552 [6.767493054s] Jun 2 10:59:41.534: INFO: Created: latency-svc-4wm87 Jun 2 10:59:41.549: INFO: Got endpoints: latency-svc-4wm87 [6.802284441s] Jun 2 10:59:42.073: INFO: Created: latency-svc-rxnmp Jun 2 10:59:42.089: INFO: Got endpoints: latency-svc-rxnmp [7.299358885s] Jun 2 10:59:42.594: INFO: Created: latency-svc-9f24g Jun 2 10:59:42.610: INFO: Got endpoints: latency-svc-9f24g [7.323849498s] Jun 2 10:59:42.636: INFO: Created: latency-svc-kv6sl Jun 2 10:59:42.676: INFO: Got endpoints: latency-svc-kv6sl [6.873392191s] Jun 2 10:59:43.132: INFO: Created: latency-svc-4zjbq Jun 2 10:59:43.144: INFO: Got endpoints: latency-svc-4zjbq [6.820144098s] Jun 2 10:59:43.613: INFO: Created: latency-svc-rt7c6 Jun 2 10:59:43.694: INFO: Got endpoints: latency-svc-rt7c6 [6.915490807s] Jun 2 10:59:44.108: INFO: Created: latency-svc-jpxt5 Jun 2 10:59:44.127: INFO: Got endpoints: latency-svc-jpxt5 [6.831762738s] Jun 2 10:59:44.887: INFO: Created: latency-svc-p6xgn Jun 2 10:59:44.889: INFO: Got endpoints: latency-svc-p6xgn [7.115163973s] Jun 2 10:59:45.368: INFO: Created: latency-svc-p8zr8 Jun 2 10:59:45.379: INFO: Got endpoints: latency-svc-p8zr8 [7.131961379s] Jun 2 10:59:45.869: INFO: Created: latency-svc-cq7hg Jun 2 10:59:45.901: INFO: Got endpoints: latency-svc-cq7hg [7.144093359s] Jun 2 10:59:46.332: INFO: Created: latency-svc-vg7zc Jun 2 10:59:46.338: INFO: Got endpoints: latency-svc-vg7zc [7.544806825s] Jun 2 10:59:46.404: INFO: Created: latency-svc-szckc Jun 2 10:59:46.467: INFO: Got endpoints: latency-svc-szckc [7.134301124s] Jun 2 10:59:46.879: INFO: Created: latency-svc-fmltv Jun 2 10:59:46.946: INFO: Got endpoints: latency-svc-fmltv [7.108739145s] Jun 2 10:59:47.416: INFO: 
Created: latency-svc-f2gxs Jun 2 10:59:47.429: INFO: Got endpoints: latency-svc-f2gxs [6.999687434s] Jun 2 10:59:47.908: INFO: Created: latency-svc-pzqsl Jun 2 10:59:47.931: INFO: Got endpoints: latency-svc-pzqsl [6.931473347s] Jun 2 10:59:48.441: INFO: Created: latency-svc-j6657 Jun 2 10:59:48.454: INFO: Got endpoints: latency-svc-j6657 [6.904604113s] Jun 2 10:59:48.970: INFO: Created: latency-svc-qpvll Jun 2 10:59:49.012: INFO: Got endpoints: latency-svc-qpvll [6.922428538s] Jun 2 10:59:49.503: INFO: Created: latency-svc-s85f9 Jun 2 10:59:49.518: INFO: Got endpoints: latency-svc-s85f9 [6.90716456s] Jun 2 10:59:50.052: INFO: Created: latency-svc-vbbxj Jun 2 10:59:50.055: INFO: Got endpoints: latency-svc-vbbxj [7.379130641s] Jun 2 10:59:50.108: INFO: Created: latency-svc-rfnd7 Jun 2 10:59:50.126: INFO: Got endpoints: latency-svc-rfnd7 [6.982439324s] Jun 2 10:59:50.628: INFO: Created: latency-svc-gnnb7 Jun 2 10:59:50.670: INFO: Got endpoints: latency-svc-gnnb7 [6.976153433s] Jun 2 10:59:51.149: INFO: Created: latency-svc-mb7k9 Jun 2 10:59:51.162: INFO: Got endpoints: latency-svc-mb7k9 [7.035844838s] Jun 2 10:59:51.712: INFO: Created: latency-svc-4fd8z Jun 2 10:59:51.747: INFO: Got endpoints: latency-svc-4fd8z [6.857759564s] Jun 2 10:59:52.227: INFO: Created: latency-svc-5sxch Jun 2 10:59:52.239: INFO: Got endpoints: latency-svc-5sxch [6.85957527s] Jun 2 10:59:52.810: INFO: Created: latency-svc-mfsz7 Jun 2 10:59:52.813: INFO: Got endpoints: latency-svc-mfsz7 [6.911785669s] Jun 2 10:59:53.377: INFO: Created: latency-svc-srjqn Jun 2 10:59:53.379: INFO: Got endpoints: latency-svc-srjqn [7.041416618s] Jun 2 10:59:53.887: INFO: Created: latency-svc-2zjvh Jun 2 10:59:53.942: INFO: Created: latency-svc-qzwbn Jun 2 10:59:54.020: INFO: Got endpoints: latency-svc-2zjvh [7.553052345s] Jun 2 10:59:54.026: INFO: Got endpoints: latency-svc-qzwbn [7.080739749s] Jun 2 10:59:54.480: INFO: Created: latency-svc-6dx4z Jun 2 10:59:54.504: INFO: Got endpoints: latency-svc-6dx4z [7.075775884s] Jun 2 10:59:55.062: INFO: Created: latency-svc-rf2dz Jun 2 10:59:55.063: INFO: Got endpoints: latency-svc-rf2dz [7.132469518s] Jun 2 10:59:55.570: INFO: Created: latency-svc-v28kf Jun 2 10:59:55.573: INFO: Got endpoints: latency-svc-v28kf [7.119189059s] Jun 2 10:59:56.080: INFO: Created: latency-svc-8n94v Jun 2 10:59:56.095: INFO: Got endpoints: latency-svc-8n94v [7.08346584s] Jun 2 10:59:56.565: INFO: Created: latency-svc-6rzsj Jun 2 10:59:56.574: INFO: Got endpoints: latency-svc-6rzsj [7.056318937s] Jun 2 10:59:56.635: INFO: Created: latency-svc-29cxw Jun 2 10:59:56.650: INFO: Got endpoints: latency-svc-29cxw [6.59469406s] Jun 2 10:59:56.680: INFO: Created: latency-svc-p6dbj Jun 2 10:59:56.695: INFO: Got endpoints: latency-svc-p6dbj [6.568647712s] Jun 2 10:59:56.716: INFO: Created: latency-svc-8x75q Jun 2 10:59:56.732: INFO: Got endpoints: latency-svc-8x75q [6.061292636s] Jun 2 10:59:56.791: INFO: Created: latency-svc-k5zt9 Jun 2 10:59:56.803: INFO: Got endpoints: latency-svc-k5zt9 [5.64055564s] Jun 2 10:59:57.354: INFO: Created: latency-svc-7jj62 Jun 2 10:59:57.375: INFO: Got endpoints: latency-svc-7jj62 [5.62799711s] Jun 2 10:59:57.923: INFO: Created: latency-svc-gdjts Jun 2 10:59:57.926: INFO: Got endpoints: latency-svc-gdjts [5.6869713s] Jun 2 10:59:58.393: INFO: Created: latency-svc-msds9 Jun 2 10:59:58.409: INFO: Got endpoints: latency-svc-msds9 [5.596131806s] Jun 2 10:59:58.893: INFO: Created: latency-svc-d2hk5 Jun 2 10:59:58.900: INFO: Got endpoints: latency-svc-d2hk5 [5.520819041s] Jun 2 10:59:59.546: INFO: 
Created: latency-svc-v2crj Jun 2 10:59:59.553: INFO: Got endpoints: latency-svc-v2crj [5.533596697s] Jun 2 10:59:59.580: INFO: Created: latency-svc-k2kkd Jun 2 10:59:59.607: INFO: Got endpoints: latency-svc-k2kkd [5.580868027s] Jun 2 11:00:00.034: INFO: Created: latency-svc-zxfm9 Jun 2 11:00:00.045: INFO: Got endpoints: latency-svc-zxfm9 [491.598669ms] Jun 2 11:00:00.557: INFO: Created: latency-svc-29rqw Jun 2 11:00:00.560: INFO: Got endpoints: latency-svc-29rqw [6.055371761s] Jun 2 11:00:01.071: INFO: Created: latency-svc-gnctp Jun 2 11:00:01.114: INFO: Got endpoints: latency-svc-gnctp [6.050138732s] Jun 2 11:00:01.526: INFO: Created: latency-svc-gl77f Jun 2 11:00:01.538: INFO: Got endpoints: latency-svc-gl77f [5.964478379s] Jun 2 11:00:02.204: INFO: Created: latency-svc-tnxb6 Jun 2 11:00:02.210: INFO: Got endpoints: latency-svc-tnxb6 [6.114470697s] Jun 2 11:00:02.659: INFO: Created: latency-svc-qt9ff Jun 2 11:00:02.665: INFO: Got endpoints: latency-svc-qt9ff [6.091503601s] Jun 2 11:00:03.129: INFO: Created: latency-svc-rc5w5 Jun 2 11:00:03.133: INFO: Got endpoints: latency-svc-rc5w5 [6.48290798s] Jun 2 11:00:03.204: INFO: Created: latency-svc-hz7wr Jun 2 11:00:03.210: INFO: Got endpoints: latency-svc-hz7wr [6.514984845s] Jun 2 11:00:03.725: INFO: Created: latency-svc-v77f4 Jun 2 11:00:03.772: INFO: Got endpoints: latency-svc-v77f4 [7.040419283s] Jun 2 11:00:04.193: INFO: Created: latency-svc-qxn94 Jun 2 11:00:04.208: INFO: Got endpoints: latency-svc-qxn94 [7.404681654s] Jun 2 11:00:04.702: INFO: Created: latency-svc-fzvhm Jun 2 11:00:04.754: INFO: Got endpoints: latency-svc-fzvhm [7.379150162s] Jun 2 11:00:05.210: INFO: Created: latency-svc-79rtw Jun 2 11:00:05.229: INFO: Got endpoints: latency-svc-79rtw [7.303626475s] Jun 2 11:00:05.674: INFO: Created: latency-svc-z878f Jun 2 11:00:05.691: INFO: Got endpoints: latency-svc-z878f [7.281187382s] Jun 2 11:00:06.140: INFO: Created: latency-svc-xrhw7 Jun 2 11:00:06.152: INFO: Got endpoints: latency-svc-xrhw7 [7.252101544s] Jun 2 11:00:06.586: INFO: Created: latency-svc-7t4cv Jun 2 11:00:06.590: INFO: Got endpoints: latency-svc-7t4cv [6.982255903s] Jun 2 11:00:06.619: INFO: Created: latency-svc-pfmts Jun 2 11:00:06.642: INFO: Got endpoints: latency-svc-pfmts [6.597255437s] Jun 2 11:00:07.070: INFO: Created: latency-svc-7gdd2 Jun 2 11:00:07.108: INFO: Got endpoints: latency-svc-7gdd2 [6.547746925s] Jun 2 11:00:07.536: INFO: Created: latency-svc-jsqr7 Jun 2 11:00:07.549: INFO: Got endpoints: latency-svc-jsqr7 [6.435315128s] Jun 2 11:00:08.021: INFO: Created: latency-svc-x9xnf Jun 2 11:00:08.060: INFO: Got endpoints: latency-svc-x9xnf [6.521742194s] Jun 2 11:00:08.531: INFO: Created: latency-svc-f8q7q Jun 2 11:00:08.544: INFO: Got endpoints: latency-svc-f8q7q [6.334376159s] Jun 2 11:00:09.059: INFO: Created: latency-svc-bcnzr Jun 2 11:00:09.072: INFO: Got endpoints: latency-svc-bcnzr [6.40607326s] Jun 2 11:00:09.549: INFO: Created: latency-svc-6vzrj Jun 2 11:00:09.569: INFO: Got endpoints: latency-svc-6vzrj [6.436019144s] Jun 2 11:00:10.043: INFO: Created: latency-svc-4n2mm Jun 2 11:00:10.048: INFO: Got endpoints: latency-svc-4n2mm [6.837928745s] Jun 2 11:00:10.070: INFO: Created: latency-svc-ppzdx Jun 2 11:00:10.078: INFO: Got endpoints: latency-svc-ppzdx [6.305818126s] Jun 2 11:00:10.533: INFO: Created: latency-svc-nvqs5 Jun 2 11:00:10.587: INFO: Got endpoints: latency-svc-nvqs5 [6.378934796s] Jun 2 11:00:11.011: INFO: Created: latency-svc-bqhz4 Jun 2 11:00:11.054: INFO: Got endpoints: latency-svc-bqhz4 [6.299604953s] Jun 2 11:00:11.538: INFO: 
Created: latency-svc-2rwpg Jun 2 11:00:11.547: INFO: Got endpoints: latency-svc-2rwpg [6.317437002s] Jun 2 11:00:12.127: INFO: Created: latency-svc-t6qhv Jun 2 11:00:12.130: INFO: Got endpoints: latency-svc-t6qhv [6.439422381s] Jun 2 11:00:12.708: INFO: Created: latency-svc-lpwsj Jun 2 11:00:12.711: INFO: Got endpoints: latency-svc-lpwsj [6.558849479s] Jun 2 11:00:13.222: INFO: Created: latency-svc-lwmfd Jun 2 11:00:13.270: INFO: Got endpoints: latency-svc-lwmfd [6.679841968s] Jun 2 11:00:13.711: INFO: Created: latency-svc-4kcmf Jun 2 11:00:13.760: INFO: Got endpoints: latency-svc-4kcmf [7.118018136s] Jun 2 11:00:13.785: INFO: Created: latency-svc-mc9db Jun 2 11:00:13.800: INFO: Got endpoints: latency-svc-mc9db [6.692508084s] Jun 2 11:00:14.277: INFO: Created: latency-svc-zvd6g Jun 2 11:00:14.295: INFO: Got endpoints: latency-svc-zvd6g [6.746212824s] Jun 2 11:00:14.745: INFO: Created: latency-svc-xx92v Jun 2 11:00:14.802: INFO: Got endpoints: latency-svc-xx92v [6.742810997s] Jun 2 11:00:15.301: INFO: Created: latency-svc-7knpn Jun 2 11:00:15.316: INFO: Got endpoints: latency-svc-7knpn [6.772309167s] Jun 2 11:00:15.841: INFO: Created: latency-svc-rgqhq Jun 2 11:00:15.856: INFO: Got endpoints: latency-svc-rgqhq [6.784181219s] Jun 2 11:00:16.577: INFO: Created: latency-svc-r52wl Jun 2 11:00:16.593: INFO: Got endpoints: latency-svc-r52wl [7.023902673s] Jun 2 11:00:17.046: INFO: Created: latency-svc-7t2l6 Jun 2 11:00:17.126: INFO: Got endpoints: latency-svc-7t2l6 [7.078242347s] Jun 2 11:00:17.128: INFO: Created: latency-svc-l574z Jun 2 11:00:17.132: INFO: Got endpoints: latency-svc-l574z [7.053871963s] Jun 2 11:00:17.152: INFO: Created: latency-svc-gml7l Jun 2 11:00:17.169: INFO: Got endpoints: latency-svc-gml7l [6.582398609s] Jun 2 11:00:17.195: INFO: Created: latency-svc-m2ccp Jun 2 11:00:17.211: INFO: Got endpoints: latency-svc-m2ccp [6.156867446s] Jun 2 11:00:17.282: INFO: Created: latency-svc-75jc7 Jun 2 11:00:17.295: INFO: Got endpoints: latency-svc-75jc7 [5.748372704s] Jun 2 11:00:17.332: INFO: Created: latency-svc-n9brv Jun 2 11:00:17.343: INFO: Got endpoints: latency-svc-n9brv [5.213407146s] Jun 2 11:00:17.368: INFO: Created: latency-svc-7b8v4 Jun 2 11:00:17.380: INFO: Got endpoints: latency-svc-7b8v4 [4.668611292s] Jun 2 11:00:17.431: INFO: Created: latency-svc-2nrgc Jun 2 11:00:17.440: INFO: Got endpoints: latency-svc-2nrgc [4.169901919s] Jun 2 11:00:17.501: INFO: Created: latency-svc-jlk5r Jun 2 11:00:17.526: INFO: Got endpoints: latency-svc-jlk5r [3.765114484s] Jun 2 11:00:17.596: INFO: Created: latency-svc-2p77c Jun 2 11:00:17.609: INFO: Got endpoints: latency-svc-2p77c [3.808346874s] Jun 2 11:00:17.632: INFO: Created: latency-svc-n9nmj Jun 2 11:00:17.658: INFO: Got endpoints: latency-svc-n9nmj [3.362631381s] Jun 2 11:00:17.675: INFO: Created: latency-svc-wd9vw Jun 2 11:00:17.737: INFO: Got endpoints: latency-svc-wd9vw [2.934339959s] Jun 2 11:00:17.739: INFO: Created: latency-svc-t9ttv Jun 2 11:00:17.747: INFO: Got endpoints: latency-svc-t9ttv [2.430745313s] Jun 2 11:00:17.782: INFO: Created: latency-svc-q4klr Jun 2 11:00:17.802: INFO: Got endpoints: latency-svc-q4klr [1.945913245s] Jun 2 11:00:17.893: INFO: Created: latency-svc-sthwb Jun 2 11:00:17.907: INFO: Got endpoints: latency-svc-sthwb [1.314033768s] Jun 2 11:00:17.958: INFO: Created: latency-svc-dxt5m Jun 2 11:00:17.970: INFO: Got endpoints: latency-svc-dxt5m [844.145931ms] Jun 2 11:00:18.024: INFO: Created: latency-svc-4glnd Jun 2 11:00:18.027: INFO: Got endpoints: latency-svc-4glnd [895.324404ms] Jun 2 11:00:18.070: 
INFO: Created: latency-svc-lnqvd Jun 2 11:00:18.086: INFO: Got endpoints: latency-svc-lnqvd [916.261151ms] Jun 2 11:00:18.112: INFO: Created: latency-svc-8npmz Jun 2 11:00:18.180: INFO: Got endpoints: latency-svc-8npmz [969.21376ms] Jun 2 11:00:18.202: INFO: Created: latency-svc-7kfmr Jun 2 11:00:18.225: INFO: Got endpoints: latency-svc-7kfmr [929.574647ms] Jun 2 11:00:18.336: INFO: Created: latency-svc-f8f6w Jun 2 11:00:18.343: INFO: Got endpoints: latency-svc-f8f6w [999.798261ms] Jun 2 11:00:18.363: INFO: Created: latency-svc-xfgvt Jun 2 11:00:18.374: INFO: Got endpoints: latency-svc-xfgvt [994.146978ms] Jun 2 11:00:18.402: INFO: Created: latency-svc-57rrf Jun 2 11:00:18.416: INFO: Got endpoints: latency-svc-57rrf [976.144423ms] Jun 2 11:00:18.510: INFO: Created: latency-svc-hkqrd Jun 2 11:00:18.512: INFO: Got endpoints: latency-svc-hkqrd [986.311461ms] Jun 2 11:00:18.544: INFO: Created: latency-svc-2kfbf Jun 2 11:00:18.597: INFO: Got endpoints: latency-svc-2kfbf [988.82729ms] Jun 2 11:00:18.671: INFO: Created: latency-svc-p49qp Jun 2 11:00:18.675: INFO: Got endpoints: latency-svc-p49qp [1.016606837s] Jun 2 11:00:18.701: INFO: Created: latency-svc-fqpdn Jun 2 11:00:18.717: INFO: Got endpoints: latency-svc-fqpdn [980.134289ms] Jun 2 11:00:18.737: INFO: Created: latency-svc-d527q Jun 2 11:00:18.759: INFO: Got endpoints: latency-svc-d527q [1.011796272s] Jun 2 11:00:18.821: INFO: Created: latency-svc-lkx9t Jun 2 11:00:18.825: INFO: Got endpoints: latency-svc-lkx9t [1.023487092s] Jun 2 11:00:18.851: INFO: Created: latency-svc-jv2xw Jun 2 11:00:18.862: INFO: Got endpoints: latency-svc-jv2xw [954.837514ms] Jun 2 11:00:18.885: INFO: Created: latency-svc-gk7vv Jun 2 11:00:18.898: INFO: Got endpoints: latency-svc-gk7vv [927.604641ms] Jun 2 11:00:18.959: INFO: Created: latency-svc-29mr4 Jun 2 11:00:18.962: INFO: Got endpoints: latency-svc-29mr4 [934.447406ms] Jun 2 11:00:18.995: INFO: Created: latency-svc-k76qb Jun 2 11:00:19.017: INFO: Got endpoints: latency-svc-k76qb [931.800188ms] Jun 2 11:00:19.047: INFO: Created: latency-svc-bg89g Jun 2 11:00:19.102: INFO: Got endpoints: latency-svc-bg89g [921.833335ms] Jun 2 11:00:19.104: INFO: Created: latency-svc-fd5m9 Jun 2 11:00:19.115: INFO: Got endpoints: latency-svc-fd5m9 [890.231378ms] Jun 2 11:00:19.144: INFO: Created: latency-svc-lhzk9 Jun 2 11:00:19.164: INFO: Got endpoints: latency-svc-lhzk9 [820.769966ms] Jun 2 11:00:19.252: INFO: Created: latency-svc-4qwj7 Jun 2 11:00:19.255: INFO: Got endpoints: latency-svc-4qwj7 [881.036877ms] Jun 2 11:00:19.281: INFO: Created: latency-svc-42ckw Jun 2 11:00:19.296: INFO: Got endpoints: latency-svc-42ckw [880.508682ms] Jun 2 11:00:19.317: INFO: Created: latency-svc-stt79 Jun 2 11:00:19.345: INFO: Got endpoints: latency-svc-stt79 [832.592994ms] Jun 2 11:00:19.408: INFO: Created: latency-svc-qmtcq Jun 2 11:00:19.417: INFO: Got endpoints: latency-svc-qmtcq [819.468867ms] Jun 2 11:00:19.438: INFO: Created: latency-svc-ccf7m Jun 2 11:00:19.453: INFO: Got endpoints: latency-svc-ccf7m [778.655195ms] Jun 2 11:00:19.474: INFO: Created: latency-svc-zfgmn Jun 2 11:00:19.489: INFO: Got endpoints: latency-svc-zfgmn [772.091605ms] Jun 2 11:00:19.587: INFO: Created: latency-svc-468r2 Jun 2 11:00:19.618: INFO: Created: latency-svc-dxh77 Jun 2 11:00:19.618: INFO: Got endpoints: latency-svc-468r2 [858.487774ms] Jun 2 11:00:19.635: INFO: Got endpoints: latency-svc-dxh77 [809.385421ms] Jun 2 11:00:19.660: INFO: Created: latency-svc-czskq Jun 2 11:00:19.749: INFO: Got endpoints: latency-svc-czskq [886.945325ms] Jun 2 11:00:19.762: 
INFO: Created: latency-svc-tqnhk Jun 2 11:00:19.778: INFO: Got endpoints: latency-svc-tqnhk [880.318168ms] Jun 2 11:00:19.797: INFO: Created: latency-svc-g9pqt Jun 2 11:00:19.809: INFO: Got endpoints: latency-svc-g9pqt [847.035656ms] Jun 2 11:00:19.827: INFO: Created: latency-svc-rp2p7 Jun 2 11:00:19.839: INFO: Got endpoints: latency-svc-rp2p7 [821.254158ms] Jun 2 11:00:19.899: INFO: Created: latency-svc-bdfjm Jun 2 11:00:19.902: INFO: Got endpoints: latency-svc-bdfjm [799.653135ms] Jun 2 11:00:19.954: INFO: Created: latency-svc-p2qqd Jun 2 11:00:19.977: INFO: Got endpoints: latency-svc-p2qqd [861.736367ms] Jun 2 11:00:20.042: INFO: Created: latency-svc-b47kg Jun 2 11:00:20.067: INFO: Got endpoints: latency-svc-b47kg [902.547943ms] Jun 2 11:00:20.116: INFO: Created: latency-svc-ghght Jun 2 11:00:20.128: INFO: Got endpoints: latency-svc-ghght [872.647284ms] Jun 2 11:00:20.193: INFO: Created: latency-svc-qd224 Jun 2 11:00:20.247: INFO: Got endpoints: latency-svc-qd224 [950.426379ms] Jun 2 11:00:20.248: INFO: Created: latency-svc-cdgzp Jun 2 11:00:20.277: INFO: Got endpoints: latency-svc-cdgzp [932.237028ms] Jun 2 11:00:20.290: INFO: Created: latency-svc-j6w5z Jun 2 11:00:20.330: INFO: Got endpoints: latency-svc-j6w5z [913.340236ms] Jun 2 11:00:20.344: INFO: Created: latency-svc-r8npb Jun 2 11:00:20.363: INFO: Got endpoints: latency-svc-r8npb [909.377281ms] Jun 2 11:00:20.398: INFO: Created: latency-svc-5plwz Jun 2 11:00:20.422: INFO: Got endpoints: latency-svc-5plwz [932.288488ms] Jun 2 11:00:20.486: INFO: Created: latency-svc-8njld Jun 2 11:00:20.488: INFO: Got endpoints: latency-svc-8njld [870.374158ms] Jun 2 11:00:20.531: INFO: Created: latency-svc-ggfcb Jun 2 11:00:20.541: INFO: Got endpoints: latency-svc-ggfcb [905.947801ms] Jun 2 11:00:20.585: INFO: Created: latency-svc-s8xqq Jun 2 11:00:20.642: INFO: Got endpoints: latency-svc-s8xqq [892.484178ms] Jun 2 11:00:20.643: INFO: Created: latency-svc-ddkds Jun 2 11:00:20.650: INFO: Got endpoints: latency-svc-ddkds [871.438729ms] Jun 2 11:00:20.703: INFO: Created: latency-svc-qth4s Jun 2 11:00:20.716: INFO: Got endpoints: latency-svc-qth4s [906.977812ms] Jun 2 11:00:20.733: INFO: Created: latency-svc-8t597 Jun 2 11:00:20.773: INFO: Got endpoints: latency-svc-8t597 [934.207783ms] Jun 2 11:00:20.788: INFO: Created: latency-svc-8xv67 Jun 2 11:00:20.820: INFO: Got endpoints: latency-svc-8xv67 [918.079319ms] Jun 2 11:00:20.842: INFO: Created: latency-svc-9hwjw Jun 2 11:00:20.866: INFO: Got endpoints: latency-svc-9hwjw [888.814051ms] Jun 2 11:00:20.923: INFO: Created: latency-svc-5fgb2 Jun 2 11:00:20.926: INFO: Got endpoints: latency-svc-5fgb2 [858.765746ms] Jun 2 11:00:20.967: INFO: Created: latency-svc-rb2ht Jun 2 11:00:20.981: INFO: Got endpoints: latency-svc-rb2ht [853.69932ms] Jun 2 11:00:21.002: INFO: Created: latency-svc-wqvrp Jun 2 11:00:21.018: INFO: Got endpoints: latency-svc-wqvrp [770.947291ms] Jun 2 11:00:21.064: INFO: Created: latency-svc-wrzbv Jun 2 11:00:21.078: INFO: Got endpoints: latency-svc-wrzbv [800.869254ms] Jun 2 11:00:21.078: INFO: Latencies: [49.61564ms 150.719617ms 211.600601ms 331.881341ms 395.339128ms 454.437789ms 491.598669ms 770.947291ms 772.091605ms 778.655195ms 799.653135ms 800.869254ms 809.385421ms 819.468867ms 820.769966ms 821.254158ms 832.592994ms 844.145931ms 847.035656ms 853.69932ms 858.487774ms 858.765746ms 861.736367ms 870.374158ms 871.438729ms 872.647284ms 880.318168ms 880.508682ms 881.036877ms 886.945325ms 888.814051ms 890.231378ms 892.484178ms 895.324404ms 902.547943ms 905.947801ms 906.977812ms 
909.377281ms 913.340236ms 916.261151ms 918.079319ms 921.833335ms 927.604641ms 929.574647ms 931.800188ms 932.237028ms 932.288488ms 934.207783ms 934.447406ms 950.426379ms 954.837514ms 969.21376ms 976.144423ms 980.134289ms 986.311461ms 988.82729ms 994.146978ms 999.798261ms 1.011796272s 1.016606837s 1.019032126s 1.023487092s 1.049331978s 1.314033768s 1.651133131s 1.945913245s 2.217650362s 2.430745313s 2.699644987s 2.717787473s 2.751070645s 2.934339959s 3.091296441s 3.166461436s 3.179685602s 3.190215172s 3.266298804s 3.345566977s 3.362631381s 3.573437815s 3.686552057s 3.765114484s 3.808346874s 3.931392503s 3.98811254s 4.155239575s 4.169901919s 4.19115145s 4.352932056s 4.426168439s 4.52770538s 4.668611292s 4.71140223s 4.949260902s 5.082871509s 5.213407146s 5.21512848s 5.215614474s 5.239133663s 5.520819041s 5.533596697s 5.548180397s 5.580868027s 5.596131806s 5.62799711s 5.64055564s 5.6869713s 5.702337777s 5.748372704s 5.811930581s 5.964478379s 6.029936173s 6.050138732s 6.055371761s 6.061292636s 6.063190341s 6.064305192s 6.091503601s 6.114470697s 6.156867446s 6.230638839s 6.239058462s 6.299604953s 6.305818126s 6.317437002s 6.334376159s 6.378934796s 6.40607326s 6.435315128s 6.436019144s 6.439422381s 6.48290798s 6.514984845s 6.521742194s 6.538977651s 6.547746925s 6.558849479s 6.568647712s 6.582398609s 6.59469406s 6.597255437s 6.656183759s 6.679841968s 6.692508084s 6.742810997s 6.746212824s 6.767493054s 6.772309167s 6.784181219s 6.802284441s 6.820144098s 6.831762738s 6.837928745s 6.857759564s 6.85957527s 6.873392191s 6.904604113s 6.90716456s 6.911785669s 6.915490807s 6.922428538s 6.931473347s 6.976153433s 6.982255903s 6.982439324s 6.999687434s 7.008989337s 7.023902673s 7.033874688s 7.035844838s 7.040419283s 7.041416618s 7.053871963s 7.056318937s 7.075775884s 7.077980449s 7.078242347s 7.080739749s 7.08346584s 7.108739145s 7.115163973s 7.118018136s 7.119189059s 7.131961379s 7.132469518s 7.134301124s 7.144093359s 7.154757945s 7.252101544s 7.281187382s 7.299358885s 7.303626475s 7.323849498s 7.379130641s 7.379150162s 7.404681654s 7.544806825s 7.553052345s 7.580794812s 7.599694342s] Jun 2 11:00:21.078: INFO: 50 %ile: 5.533596697s Jun 2 11:00:21.078: INFO: 90 %ile: 7.115163973s Jun 2 11:00:21.078: INFO: 99 %ile: 7.580794812s Jun 2 11:00:21.078: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:00:21.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-gtdwz" for this suite. 
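An aside on what the wall of "Created" / "Got endpoints" lines above is measuring: the suite creates 200 Services against a pre-created set of backend pods and times how long it takes for each Service's Endpoints object to be populated, then reports the 50th/90th/99th percentile latencies shown above. A rough way to reproduce one such measurement by hand is sketched below; the names are illustrative, and unlike the suite this crude loop also includes pod startup time.

# Create a trivial backend pod and a Service selecting it (illustrative names).
kubectl run latency-demo --image=docker.io/library/nginx:1.14-alpine \
  --restart=Never --labels=app=latency-demo
kubectl expose pod latency-demo --name=latency-demo-svc --port=80 --target-port=80
# Time how long until the Service's Endpoints object has an address; this is
# roughly what each "Got endpoints: latency-svc-... [Ns]" line above records. (bash)
time ( until kubectl get endpoints latency-demo-svc \
         -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q . ; do sleep 0.2; done )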
Jun 2 11:00:45.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:00:45.126: INFO: namespace: e2e-tests-svc-latency-gtdwz, resource: bindings, ignored listing per whitelist Jun 2 11:00:45.171: INFO: namespace e2e-tests-svc-latency-gtdwz deletion completed in 24.086957888s • [SLOW TEST:86.032 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:00:45.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 2 11:00:45.313: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4eecc14d-a4c0-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-d762n" to be "success or failure" Jun 2 11:00:45.316: INFO: Pod "downwardapi-volume-4eecc14d-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.324266ms Jun 2 11:00:47.319: INFO: Pod "downwardapi-volume-4eecc14d-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006695464s Jun 2 11:00:49.324: INFO: Pod "downwardapi-volume-4eecc14d-a4c0-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011239364s STEP: Saw pod success Jun 2 11:00:49.324: INFO: Pod "downwardapi-volume-4eecc14d-a4c0-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:00:49.328: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-4eecc14d-a4c0-11ea-889d-0242ac110018 container client-container: STEP: delete the pod Jun 2 11:00:49.347: INFO: Waiting for pod downwardapi-volume-4eecc14d-a4c0-11ea-889d-0242ac110018 to disappear Jun 2 11:00:49.390: INFO: Pod downwardapi-volume-4eecc14d-a4c0-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:00:49.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d762n" for this suite. 
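For context on the test that just finished: a projected downwardAPI volume exposes container resource fields as files, and when the container declares no cpu limit, the file backed by limits.cpu falls back to the node's allocatable cpu, which is what the pod's output is checked against. A minimal sketch of such a pod, with illustrative names rather than the generated ones:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
# Once the pod has completed, its log should show the defaulted value
# (node allocatable cpu, rounded up to whole cores):
kubectl logs downwardapi-cpu-demo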
Jun 2 11:00:55.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:00:55.496: INFO: namespace: e2e-tests-projected-d762n, resource: bindings, ignored listing per whitelist Jun 2 11:00:55.526: INFO: namespace e2e-tests-projected-d762n deletion completed in 6.132323976s • [SLOW TEST:10.355 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:00:55.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-551780fb-a4c0-11ea-889d-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 2 11:00:55.660: INFO: Waiting up to 5m0s for pod "pod-configmaps-55181ab3-a4c0-11ea-889d-0242ac110018" in namespace "e2e-tests-configmap-tsms6" to be "success or failure" Jun 2 11:00:55.664: INFO: Pod "pod-configmaps-55181ab3-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.87131ms Jun 2 11:00:57.668: INFO: Pod "pod-configmaps-55181ab3-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008238568s Jun 2 11:00:59.672: INFO: Pod "pod-configmaps-55181ab3-a4c0-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011463824s STEP: Saw pod success Jun 2 11:00:59.672: INFO: Pod "pod-configmaps-55181ab3-a4c0-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:00:59.674: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-55181ab3-a4c0-11ea-889d-0242ac110018 container configmap-volume-test: STEP: delete the pod Jun 2 11:00:59.727: INFO: Waiting for pod pod-configmaps-55181ab3-a4c0-11ea-889d-0242ac110018 to disappear Jun 2 11:00:59.736: INFO: Pod pod-configmaps-55181ab3-a4c0-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:00:59.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-tsms6" for this suite. 
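"Mappings and Item mode" in the test name refers to a configMap volume that uses items: to remap a key onto a different path and sets a per-item file mode. A hedged sketch of that shape; the names, key, and mode below are illustrative, not the ones the suite generates:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-item-demo          # illustrative
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-item-demo-pod      # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/configmap-volume/mapped && cat /etc/configmap-volume/mapped/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-item-demo
      items:
      - key: data-1
        path: mapped/data-1          # key remapped onto a different path
        mode: 0400                   # per-item file mode
EOF
# The pod log should list data-1 with mode -r-------- and print value-1.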
Jun 2 11:01:05.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:01:05.765: INFO: namespace: e2e-tests-configmap-tsms6, resource: bindings, ignored listing per whitelist Jun 2 11:01:05.833: INFO: namespace e2e-tests-configmap-tsms6 deletion completed in 6.092847128s • [SLOW TEST:10.307 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:01:05.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 2 11:01:05.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-kqsnk' Jun 2 11:01:06.043: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 2 11:01:06.043: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Jun 2 11:01:06.061: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Jun 2 11:01:06.151: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jun 2 11:01:06.192: INFO: scanned /root for discovery docs: Jun 2 11:01:06.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-kqsnk' Jun 2 11:01:22.044: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 2 11:01:22.044: INFO: stdout: "Created e2e-test-nginx-rc-32b24c9e48b937e383272073129d1b90\nScaling up e2e-test-nginx-rc-32b24c9e48b937e383272073129d1b90 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-32b24c9e48b937e383272073129d1b90 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-32b24c9e48b937e383272073129d1b90 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Jun 2 11:01:22.044: INFO: stdout: "Created e2e-test-nginx-rc-32b24c9e48b937e383272073129d1b90\nScaling up e2e-test-nginx-rc-32b24c9e48b937e383272073129d1b90 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-32b24c9e48b937e383272073129d1b90 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-32b24c9e48b937e383272073129d1b90 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Jun 2 11:01:22.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-kqsnk' Jun 2 11:01:22.147: INFO: stderr: "" Jun 2 11:01:22.147: INFO: stdout: "e2e-test-nginx-rc-32b24c9e48b937e383272073129d1b90-4hb9w " Jun 2 11:01:22.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-32b24c9e48b937e383272073129d1b90-4hb9w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kqsnk' Jun 2 11:01:22.259: INFO: stderr: "" Jun 2 11:01:22.259: INFO: stdout: "true" Jun 2 11:01:22.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-32b24c9e48b937e383272073129d1b90-4hb9w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kqsnk' Jun 2 11:01:22.350: INFO: stderr: "" Jun 2 11:01:22.350: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jun 2 11:01:22.350: INFO: e2e-test-nginx-rc-32b24c9e48b937e383272073129d1b90-4hb9w is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Jun 2 11:01:22.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-kqsnk' Jun 2 11:01:22.499: INFO: stderr: "" Jun 2 11:01:22.499: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:01:22.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kqsnk" for this suite. 
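As the stderr lines above note, both kubectl run --generator=run/v1 and kubectl rolling-update were already deprecated in this release; the test still drives a replication-controller-based rolling update to the same image. On current clusters the equivalent workflow goes through a Deployment and kubectl rollout; a sketch with illustrative names (kubectl rollout restart needs kubectl 1.15 or newer, so it is not available to the v1.13 client used here):

# Deployment-based equivalent of the RC rolling update exercised above.
kubectl create deployment nginx-demo --image=docker.io/library/nginx:1.14-alpine
kubectl rollout status deployment/nginx-demo
# "Rolling update to the same image" maps most closely to a rollout restart:
kubectl rollout restart deployment/nginx-demo
kubectl rollout status deployment/nginx-demo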
Jun 2 11:01:38.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:01:38.583: INFO: namespace: e2e-tests-kubectl-kqsnk, resource: bindings, ignored listing per whitelist Jun 2 11:01:38.666: INFO: namespace e2e-tests-kubectl-kqsnk deletion completed in 16.160495177s • [SLOW TEST:32.832 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:01:38.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 2 11:01:38.823: INFO: Waiting up to 5m0s for pod "pod-6ed13375-a4c0-11ea-889d-0242ac110018" in namespace "e2e-tests-emptydir-sdj4j" to be "success or failure" Jun 2 11:01:38.827: INFO: Pod "pod-6ed13375-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.846119ms Jun 2 11:01:40.831: INFO: Pod "pod-6ed13375-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007967836s Jun 2 11:01:42.877: INFO: Pod "pod-6ed13375-a4c0-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053259934s STEP: Saw pod success Jun 2 11:01:42.877: INFO: Pod "pod-6ed13375-a4c0-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:01:42.879: INFO: Trying to get logs from node hunter-worker2 pod pod-6ed13375-a4c0-11ea-889d-0242ac110018 container test-container: STEP: delete the pod Jun 2 11:01:42.900: INFO: Waiting for pod pod-6ed13375-a4c0-11ea-889d-0242ac110018 to disappear Jun 2 11:01:43.080: INFO: Pod pod-6ed13375-a4c0-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:01:43.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-sdj4j" for this suite. 
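The (non-root,0644,default) tuple in the test name means: run the container as a non-root user, expect a file created with mode 0644, and back the emptyDir with the default medium (node disk rather than tmpfs). A rough equivalent of what the test pod does, with illustrative names and UID:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo            # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                   # non-root, illustrative UID
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /ephemeral/file && chmod 0644 /ephemeral/file && ls -l /ephemeral/file"]
    volumeMounts:
    - name: scratch
      mountPath: /ephemeral
  volumes:
  - name: scratch
    emptyDir: {}                      # default medium (node disk)
EOF
kubectl logs emptydir-0644-demo       # expect -rw-r--r-- ... /ephemeral/file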
Jun 2 11:01:49.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:01:49.267: INFO: namespace: e2e-tests-emptydir-sdj4j, resource: bindings, ignored listing per whitelist Jun 2 11:01:49.322: INFO: namespace e2e-tests-emptydir-sdj4j deletion completed in 6.238538912s • [SLOW TEST:10.656 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:01:49.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jun 2 11:01:49.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:01:49.712: INFO: stderr: "" Jun 2 11:01:49.712: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 2 11:01:49.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:01:49.827: INFO: stderr: "" Jun 2 11:01:49.827: INFO: stdout: "update-demo-nautilus-4t86w update-demo-nautilus-j95kb " Jun 2 11:01:49.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4t86w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:01:49.938: INFO: stderr: "" Jun 2 11:01:49.938: INFO: stdout: "" Jun 2 11:01:49.938: INFO: update-demo-nautilus-4t86w is created but not running Jun 2 11:01:54.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:01:55.044: INFO: stderr: "" Jun 2 11:01:55.044: INFO: stdout: "update-demo-nautilus-4t86w update-demo-nautilus-j95kb " Jun 2 11:01:55.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4t86w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:01:55.156: INFO: stderr: "" Jun 2 11:01:55.156: INFO: stdout: "true" Jun 2 11:01:55.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4t86w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:01:55.251: INFO: stderr: "" Jun 2 11:01:55.251: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 2 11:01:55.251: INFO: validating pod update-demo-nautilus-4t86w Jun 2 11:01:55.271: INFO: got data: { "image": "nautilus.jpg" } Jun 2 11:01:55.271: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 2 11:01:55.271: INFO: update-demo-nautilus-4t86w is verified up and running Jun 2 11:01:55.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j95kb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:01:55.384: INFO: stderr: "" Jun 2 11:01:55.384: INFO: stdout: "true" Jun 2 11:01:55.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j95kb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:01:55.484: INFO: stderr: "" Jun 2 11:01:55.484: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 2 11:01:55.484: INFO: validating pod update-demo-nautilus-j95kb Jun 2 11:01:55.509: INFO: got data: { "image": "nautilus.jpg" } Jun 2 11:01:55.509: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 2 11:01:55.509: INFO: update-demo-nautilus-j95kb is verified up and running STEP: scaling down the replication controller Jun 2 11:01:55.512: INFO: scanned /root for discovery docs: Jun 2 11:01:55.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:01:56.663: INFO: stderr: "" Jun 2 11:01:56.663: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 2 11:01:56.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:01:56.776: INFO: stderr: "" Jun 2 11:01:56.776: INFO: stdout: "update-demo-nautilus-4t86w update-demo-nautilus-j95kb " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 2 11:02:01.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:02:01.878: INFO: stderr: "" Jun 2 11:02:01.878: INFO: stdout: "update-demo-nautilus-4t86w " Jun 2 11:02:01.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4t86w -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:02:01.971: INFO: stderr: "" Jun 2 11:02:01.971: INFO: stdout: "true" Jun 2 11:02:01.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4t86w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:02:02.073: INFO: stderr: "" Jun 2 11:02:02.073: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 2 11:02:02.073: INFO: validating pod update-demo-nautilus-4t86w Jun 2 11:02:02.077: INFO: got data: { "image": "nautilus.jpg" } Jun 2 11:02:02.077: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 2 11:02:02.077: INFO: update-demo-nautilus-4t86w is verified up and running STEP: scaling up the replication controller Jun 2 11:02:02.079: INFO: scanned /root for discovery docs: Jun 2 11:02:02.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:02:03.221: INFO: stderr: "" Jun 2 11:02:03.221: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 2 11:02:03.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:02:03.384: INFO: stderr: "" Jun 2 11:02:03.384: INFO: stdout: "update-demo-nautilus-4t86w update-demo-nautilus-6s42b " Jun 2 11:02:03.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4t86w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:02:03.475: INFO: stderr: "" Jun 2 11:02:03.475: INFO: stdout: "true" Jun 2 11:02:03.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4t86w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:02:03.579: INFO: stderr: "" Jun 2 11:02:03.579: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 2 11:02:03.579: INFO: validating pod update-demo-nautilus-4t86w Jun 2 11:02:03.583: INFO: got data: { "image": "nautilus.jpg" } Jun 2 11:02:03.583: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 2 11:02:03.583: INFO: update-demo-nautilus-4t86w is verified up and running Jun 2 11:02:03.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6s42b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:02:03.741: INFO: stderr: "" Jun 2 11:02:03.741: INFO: stdout: "" Jun 2 11:02:03.741: INFO: update-demo-nautilus-6s42b is created but not running Jun 2 11:02:08.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:02:08.866: INFO: stderr: "" Jun 2 11:02:08.866: INFO: stdout: "update-demo-nautilus-4t86w update-demo-nautilus-6s42b " Jun 2 11:02:08.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4t86w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:02:08.993: INFO: stderr: "" Jun 2 11:02:08.993: INFO: stdout: "true" Jun 2 11:02:08.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4t86w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:02:09.101: INFO: stderr: "" Jun 2 11:02:09.101: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 2 11:02:09.101: INFO: validating pod update-demo-nautilus-4t86w Jun 2 11:02:09.105: INFO: got data: { "image": "nautilus.jpg" } Jun 2 11:02:09.105: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 2 11:02:09.105: INFO: update-demo-nautilus-4t86w is verified up and running Jun 2 11:02:09.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6s42b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:02:09.198: INFO: stderr: "" Jun 2 11:02:09.198: INFO: stdout: "true" Jun 2 11:02:09.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6s42b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:02:09.315: INFO: stderr: "" Jun 2 11:02:09.315: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 2 11:02:09.315: INFO: validating pod update-demo-nautilus-6s42b Jun 2 11:02:09.319: INFO: got data: { "image": "nautilus.jpg" } Jun 2 11:02:09.319: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 2 11:02:09.319: INFO: update-demo-nautilus-6s42b is verified up and running STEP: using delete to clean up resources Jun 2 11:02:09.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:02:09.439: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 2 11:02:09.439: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 2 11:02:09.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-cxfch' Jun 2 11:02:09.632: INFO: stderr: "No resources found.\n" Jun 2 11:02:09.632: INFO: stdout: "" Jun 2 11:02:09.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-cxfch -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 2 11:02:09.740: INFO: stderr: "" Jun 2 11:02:09.740: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:02:09.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-cxfch" for this suite. Jun 2 11:02:15.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:02:15.968: INFO: namespace: e2e-tests-kubectl-cxfch, resource: bindings, ignored listing per whitelist Jun 2 11:02:15.974: INFO: namespace e2e-tests-kubectl-cxfch deletion completed in 6.230277531s • [SLOW TEST:26.652 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:02:15.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-85052168-a4c0-11ea-889d-0242ac110018 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-85052168-a4c0-11ea-889d-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:03:26.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-sr256" for this suite. 
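The ConfigMap test that just finished depends on the kubelet periodically re-syncing configMap volumes, so an update to the ConfigMap eventually appears inside an already-running pod without a restart; most of the gap between pod creation and teardown above is that propagation wait. A hand-run version of the same check, with illustrative names:

kubectl create configmap cm-update-demo --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cm-update-demo-pod
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-update-demo
EOF
# Update the ConfigMap in place; after the kubelet's sync period (typically up
# to about a minute) the mounted file reflects the new value without a restart:
kubectl patch configmap cm-update-demo -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f cm-update-demo-pod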
Jun 2 11:03:48.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:03:48.544: INFO: namespace: e2e-tests-configmap-sr256, resource: bindings, ignored listing per whitelist Jun 2 11:03:48.551: INFO: namespace e2e-tests-configmap-sr256 deletion completed in 22.126494555s • [SLOW TEST:92.577 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:03:48.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-bc338b14-a4c0-11ea-889d-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 2 11:03:48.692: INFO: Waiting up to 5m0s for pod "pod-configmaps-bc3ad666-a4c0-11ea-889d-0242ac110018" in namespace "e2e-tests-configmap-8zvrg" to be "success or failure" Jun 2 11:03:48.694: INFO: Pod "pod-configmaps-bc3ad666-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011336ms Jun 2 11:03:50.737: INFO: Pod "pod-configmaps-bc3ad666-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044746717s Jun 2 11:03:52.741: INFO: Pod "pod-configmaps-bc3ad666-a4c0-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049049907s STEP: Saw pod success Jun 2 11:03:52.742: INFO: Pod "pod-configmaps-bc3ad666-a4c0-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:03:52.745: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-bc3ad666-a4c0-11ea-889d-0242ac110018 container configmap-volume-test: STEP: delete the pod Jun 2 11:03:52.768: INFO: Waiting for pod pod-configmaps-bc3ad666-a4c0-11ea-889d-0242ac110018 to disappear Jun 2 11:03:52.787: INFO: Pod pod-configmaps-bc3ad666-a4c0-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:03:52.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-8zvrg" for this suite. 
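Here the point is simply that one ConfigMap can back two volumes mounted at different paths in the same pod. A sketch with illustrative names:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-multi-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-multi-demo-pod
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: cm-multi-demo
  - name: configmap-volume-2
    configMap:
      name: cm-multi-demo
EOF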
Jun 2 11:03:58.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:03:58.880: INFO: namespace: e2e-tests-configmap-8zvrg, resource: bindings, ignored listing per whitelist Jun 2 11:03:58.921: INFO: namespace e2e-tests-configmap-8zvrg deletion completed in 6.130376269s • [SLOW TEST:10.369 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:03:58.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:04:03.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-ff782" for this suite. 
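"EmptyDir wrapper volumes" refers to volume types the kubelet implements on top of a hidden emptyDir (secret, configMap, downwardAPI, and similar); the test mounts more than one of them in a single pod and checks that their mounts do not conflict. Roughly the shape being exercised, with illustrative names:

kubectl create secret generic wrapper-secret-demo --from-literal=key=value
kubectl create configmap wrapper-cm-demo --from-literal=key=value
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo-pod
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-secret-demo
  - name: configmap-volume
    configMap:
      name: wrapper-cm-demo
EOF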
Jun 2 11:04:09.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:04:09.211: INFO: namespace: e2e-tests-emptydir-wrapper-ff782, resource: bindings, ignored listing per whitelist Jun 2 11:04:09.241: INFO: namespace e2e-tests-emptydir-wrapper-ff782 deletion completed in 6.073132148s • [SLOW TEST:10.320 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:04:09.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 2 11:04:09.406: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c88c7a1f-a4c0-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-zzlhv" to be "success or failure" Jun 2 11:04:09.442: INFO: Pod "downwardapi-volume-c88c7a1f-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 35.781237ms Jun 2 11:04:11.446: INFO: Pod "downwardapi-volume-c88c7a1f-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039741571s Jun 2 11:04:13.450: INFO: Pod "downwardapi-volume-c88c7a1f-a4c0-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04419313s STEP: Saw pod success Jun 2 11:04:13.450: INFO: Pod "downwardapi-volume-c88c7a1f-a4c0-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:04:13.453: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-c88c7a1f-a4c0-11ea-889d-0242ac110018 container client-container: STEP: delete the pod Jun 2 11:04:13.538: INFO: Waiting for pod downwardapi-volume-c88c7a1f-a4c0-11ea-889d-0242ac110018 to disappear Jun 2 11:04:13.543: INFO: Pod downwardapi-volume-c88c7a1f-a4c0-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:04:13.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zzlhv" for this suite. 
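The projected downwardAPI spec above exposes the container's memory limit through a volume file; because the container declares no memory limit, the kubelet falls back to the node's allocatable memory, which is what the test asserts. A sketch of the relevant wiring under the same v1.13-era core/v1 types; the names podinfo, client-container and the busybox image are illustrative:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A projected volume exposing limits.memory of "client-container".
	// Since that container sets no memory limit, the value written to
	// /etc/podinfo/memory_limit ends up being the node's allocatable memory.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
								Divisor:       resource.MustParse("1"),
							},
						}},
					},
				}},
			},
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{vol},
			Containers: []corev1.Container{{
				Name:         "client-container", // deliberately no resources.limits.memory
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}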
Jun 2 11:04:19.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:04:19.601: INFO: namespace: e2e-tests-projected-zzlhv, resource: bindings, ignored listing per whitelist Jun 2 11:04:19.627: INFO: namespace e2e-tests-projected-zzlhv deletion completed in 6.08135994s • [SLOW TEST:10.386 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:04:19.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-cebada09-a4c0-11ea-889d-0242ac110018 STEP: Creating a pod to test consume secrets Jun 2 11:04:19.768: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cec00c0c-a4c0-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-fhtwf" to be "success or failure" Jun 2 11:04:19.771: INFO: Pod "pod-projected-secrets-cec00c0c-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.504699ms Jun 2 11:04:21.933: INFO: Pod "pod-projected-secrets-cec00c0c-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165697851s Jun 2 11:04:23.938: INFO: Pod "pod-projected-secrets-cec00c0c-a4c0-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.169952186s STEP: Saw pod success Jun 2 11:04:23.938: INFO: Pod "pod-projected-secrets-cec00c0c-a4c0-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:04:23.941: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-cec00c0c-a4c0-11ea-889d-0242ac110018 container projected-secret-volume-test: STEP: delete the pod Jun 2 11:04:23.965: INFO: Waiting for pod pod-projected-secrets-cec00c0c-a4c0-11ea-889d-0242ac110018 to disappear Jun 2 11:04:23.981: INFO: Pod pod-projected-secrets-cec00c0c-a4c0-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:04:23.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fhtwf" for this suite. 
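The projected-secret spec above mounts a Secret through a projected volume rather than a plain secret volume, then reads the keys back as files. A minimal sketch of the volume wiring with the v1.13-era core/v1 types; the secret name demo-secret and the busybox image are placeholders:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A projected volume whose single source is a Secret; the keys of
	// "demo-secret" appear as files under /etc/projected-secret.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-secret"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/projected-secret/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret", MountPath: "/etc/projected-secret", ReadOnly: true}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}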
Jun 2 11:04:30.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:04:30.023: INFO: namespace: e2e-tests-projected-fhtwf, resource: bindings, ignored listing per whitelist Jun 2 11:04:30.078: INFO: namespace e2e-tests-projected-fhtwf deletion completed in 6.093191692s • [SLOW TEST:10.451 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:04:30.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 2 11:04:30.180: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d4f1b9c8-a4c0-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-wcdgd" to be "success or failure" Jun 2 11:04:30.206: INFO: Pod "downwardapi-volume-d4f1b9c8-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 26.336206ms Jun 2 11:04:32.210: INFO: Pod "downwardapi-volume-d4f1b9c8-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030390579s Jun 2 11:04:34.214: INFO: Pod "downwardapi-volume-d4f1b9c8-a4c0-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034352334s STEP: Saw pod success Jun 2 11:04:34.215: INFO: Pod "downwardapi-volume-d4f1b9c8-a4c0-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:04:34.217: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-d4f1b9c8-a4c0-11ea-889d-0242ac110018 container client-container: STEP: delete the pod Jun 2 11:04:34.241: INFO: Waiting for pod downwardapi-volume-d4f1b9c8-a4c0-11ea-889d-0242ac110018 to disappear Jun 2 11:04:34.246: INFO: Pod downwardapi-volume-d4f1b9c8-a4c0-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:04:34.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wcdgd" for this suite. 
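The "podname only" variant above differs from the memory-limit case in the kind of reference it projects: it uses an object fieldRef on metadata.name instead of a resourceFieldRef. A compact sketch of just that projection item (same v1.13-era types; the path podname is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The file /etc/podinfo/podname would contain the pod's own metadata.name.
	item := corev1.DownwardAPIVolumeFile{
		Path:     "podname",
		FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
	}
	projection := corev1.VolumeProjection{
		DownwardAPI: &corev1.DownwardAPIProjection{Items: []corev1.DownwardAPIVolumeFile{item}},
	}
	fmt.Printf("%+v\n", projection) // placed under a projected volume's Sources, as in the earlier memory-limit sketch
}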
Jun 2 11:04:40.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:04:40.280: INFO: namespace: e2e-tests-projected-wcdgd, resource: bindings, ignored listing per whitelist Jun 2 11:04:40.336: INFO: namespace e2e-tests-projected-wcdgd deletion completed in 6.08783399s • [SLOW TEST:10.258 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:04:40.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Jun 2 11:04:40.474: INFO: Waiting up to 5m0s for pod "client-containers-db17fdca-a4c0-11ea-889d-0242ac110018" in namespace "e2e-tests-containers-zg5d6" to be "success or failure" Jun 2 11:04:40.508: INFO: Pod "client-containers-db17fdca-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 34.268321ms Jun 2 11:04:42.512: INFO: Pod "client-containers-db17fdca-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038707348s Jun 2 11:04:44.516: INFO: Pod "client-containers-db17fdca-a4c0-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042806608s STEP: Saw pod success Jun 2 11:04:44.516: INFO: Pod "client-containers-db17fdca-a4c0-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:04:44.519: INFO: Trying to get logs from node hunter-worker2 pod client-containers-db17fdca-a4c0-11ea-889d-0242ac110018 container test-container: STEP: delete the pod Jun 2 11:04:44.558: INFO: Waiting for pod client-containers-db17fdca-a4c0-11ea-889d-0242ac110018 to disappear Jun 2 11:04:44.568: INFO: Pod client-containers-db17fdca-a4c0-11ea-889d-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:04:44.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-zg5d6" for this suite. 
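The Docker Containers spec above ("override the image's default command and arguments") sets both command and args on the container, which override the image's ENTRYPOINT and CMD respectively. A small sketch of the relevant container fields (v1.13-era core/v1 types; image and values are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Command replaces the image's ENTRYPOINT, Args replaces its CMD.
	// With neither set, the image defaults are used; setting only Args
	// keeps the ENTRYPOINT and overrides just the CMD.
	c := corev1.Container{
		Name:    "test-container",
		Image:   "busybox",
		Command: []string{"/bin/sh", "-c"},          // overrides ENTRYPOINT
		Args:    []string{"echo override all; env"}, // overrides CMD
	}
	fmt.Printf("%+v\n", c)
}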
Jun 2 11:04:50.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:04:50.778: INFO: namespace: e2e-tests-containers-zg5d6, resource: bindings, ignored listing per whitelist Jun 2 11:04:50.788: INFO: namespace e2e-tests-containers-zg5d6 deletion completed in 6.215943549s • [SLOW TEST:10.451 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:04:50.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 2 11:04:50.909: INFO: Waiting up to 5m0s for pod "pod-e1507612-a4c0-11ea-889d-0242ac110018" in namespace "e2e-tests-emptydir-nzqqk" to be "success or failure" Jun 2 11:04:50.912: INFO: Pod "pod-e1507612-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.630158ms Jun 2 11:04:52.916: INFO: Pod "pod-e1507612-a4c0-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006869745s Jun 2 11:04:54.921: INFO: Pod "pod-e1507612-a4c0-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010991304s STEP: Saw pod success Jun 2 11:04:54.921: INFO: Pod "pod-e1507612-a4c0-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:04:54.924: INFO: Trying to get logs from node hunter-worker pod pod-e1507612-a4c0-11ea-889d-0242ac110018 container test-container: STEP: delete the pod Jun 2 11:04:54.955: INFO: Waiting for pod pod-e1507612-a4c0-11ea-889d-0242ac110018 to disappear Jun 2 11:04:55.022: INFO: Pod pod-e1507612-a4c0-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:04:55.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-nzqqk" for this suite. 
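The EmptyDir spec above ("root,0666,default") writes a file into an emptyDir volume on the default medium as root with mode 0666 and checks the resulting permissions. A sketch of that pod shape, with busybox standing in for the suite's own mount-test image (an assumption, not read from the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Default-medium emptyDir (node disk); the container runs as root (the
	// busybox default), creates a file with mode 0666 and prints its permissions.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /mnt/volume/file && chmod 0666 /mnt/volume/file && ls -l /mnt/volume/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/volume"}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}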
Jun 2 11:05:01.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:05:01.083: INFO: namespace: e2e-tests-emptydir-nzqqk, resource: bindings, ignored listing per whitelist Jun 2 11:05:01.149: INFO: namespace e2e-tests-emptydir-nzqqk deletion completed in 6.12217211s • [SLOW TEST:10.360 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:05:01.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 2 11:05:09.342: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 2 11:05:09.379: INFO: Pod pod-with-poststart-exec-hook still exists Jun 2 11:05:11.379: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 2 11:05:11.383: INFO: Pod pod-with-poststart-exec-hook still exists Jun 2 11:05:13.379: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 2 11:05:13.383: INFO: Pod pod-with-poststart-exec-hook still exists Jun 2 11:05:15.379: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 2 11:05:15.383: INFO: Pod pod-with-poststart-exec-hook still exists Jun 2 11:05:17.379: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 2 11:05:17.383: INFO: Pod pod-with-poststart-exec-hook still exists Jun 2 11:05:19.379: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 2 11:05:19.383: INFO: Pod pod-with-poststart-exec-hook still exists Jun 2 11:05:21.379: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 2 11:05:21.383: INFO: Pod pod-with-poststart-exec-hook still exists Jun 2 11:05:23.379: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 2 11:05:23.384: INFO: Pod pod-with-poststart-exec-hook still exists Jun 2 11:05:25.379: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 2 11:05:25.384: INFO: Pod pod-with-poststart-exec-hook still exists Jun 2 11:05:27.379: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 2 11:05:27.383: INFO: Pod pod-with-poststart-exec-hook still exists Jun 2 11:05:29.379: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 2 11:05:29.384: INFO: Pod pod-with-poststart-exec-hook 
still exists Jun 2 11:05:31.379: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 2 11:05:31.383: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:05:31.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-9bknq" for this suite. Jun 2 11:05:53.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:05:53.457: INFO: namespace: e2e-tests-container-lifecycle-hook-9bknq, resource: bindings, ignored listing per whitelist Jun 2 11:05:53.483: INFO: namespace e2e-tests-container-lifecycle-hook-9bknq deletion completed in 22.096685311s • [SLOW TEST:52.334 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:05:53.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0602 11:06:03.635917 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 2 11:06:03.635: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:06:03.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-xczxn" for this suite. Jun 2 11:06:09.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:06:09.739: INFO: namespace: e2e-tests-gc-xczxn, resource: bindings, ignored listing per whitelist Jun 2 11:06:11.280: INFO: namespace e2e-tests-gc-xczxn deletion completed in 7.615009031s • [SLOW TEST:17.796 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:06:11.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:06:15.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-mwhms" for this suite. 
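The Kubelet spec above schedules a busybox command that always fails and asserts that the container status ends up with a terminated reason. A sketch of that shape of pod and of where the reason appears in the status (v1.13-era core/v1 types; the restart policy and names are assumptions for the sketch, not read from the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A command that always exits non-zero.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // assumption for the sketch
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)

	// Once the kubelet has run the container, the assertion made by the test
	// corresponds to reading something like
	//   pod.Status.ContainerStatuses[0].State.Terminated.Reason
	// and expecting a non-empty string (typically "Error") after termination.
}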
Jun 2 11:06:21.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:06:21.630: INFO: namespace: e2e-tests-kubelet-test-mwhms, resource: bindings, ignored listing per whitelist Jun 2 11:06:21.641: INFO: namespace e2e-tests-kubelet-test-mwhms deletion completed in 6.099788245s • [SLOW TEST:10.361 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:06:21.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jun 2 11:06:29.593: INFO: 1 pods remaining Jun 2 11:06:29.593: INFO: 0 pods has nil DeletionTimestamp Jun 2 11:06:29.593: INFO: Jun 2 11:06:30.582: INFO: 0 pods remaining Jun 2 11:06:30.582: INFO: 0 pods has nil DeletionTimestamp Jun 2 11:06:30.582: INFO: STEP: Gathering metrics W0602 11:06:30.874678 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 2 11:06:30.874: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:06:30.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-gmpqv" for this suite. Jun 2 11:06:37.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:06:37.120: INFO: namespace: e2e-tests-gc-gmpqv, resource: bindings, ignored listing per whitelist Jun 2 11:06:37.178: INFO: namespace e2e-tests-gc-gmpqv deletion completed in 6.300365629s • [SLOW TEST:15.537 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:06:37.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 2 11:06:37.317: INFO: Waiting up to 5m0s for pod "downwardapi-volume-20bbbece-a4c1-11ea-889d-0242ac110018" in namespace "e2e-tests-downward-api-pmrlj" to be "success or failure" Jun 2 11:06:37.360: INFO: Pod "downwardapi-volume-20bbbece-a4c1-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 43.304765ms Jun 2 11:06:39.365: INFO: Pod "downwardapi-volume-20bbbece-a4c1-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047933675s Jun 2 11:06:41.370: INFO: Pod "downwardapi-volume-20bbbece-a4c1-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052965316s STEP: Saw pod success Jun 2 11:06:41.370: INFO: Pod "downwardapi-volume-20bbbece-a4c1-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:06:41.372: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-20bbbece-a4c1-11ea-889d-0242ac110018 container client-container: STEP: delete the pod Jun 2 11:06:41.403: INFO: Waiting for pod downwardapi-volume-20bbbece-a4c1-11ea-889d-0242ac110018 to disappear Jun 2 11:06:41.434: INFO: Pod downwardapi-volume-20bbbece-a4c1-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:06:41.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-pmrlj" for this suite. Jun 2 11:06:47.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:06:47.469: INFO: namespace: e2e-tests-downward-api-pmrlj, resource: bindings, ignored listing per whitelist Jun 2 11:06:47.513: INFO: namespace e2e-tests-downward-api-pmrlj deletion completed in 6.074087058s • [SLOW TEST:10.334 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:06:47.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Jun 2 11:06:47.689: INFO: Waiting up to 5m0s for pod "var-expansion-26e9bc2e-a4c1-11ea-889d-0242ac110018" in namespace "e2e-tests-var-expansion-rvxtm" to be "success or failure" Jun 2 11:06:47.693: INFO: Pod "var-expansion-26e9bc2e-a4c1-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.604413ms Jun 2 11:06:49.697: INFO: Pod "var-expansion-26e9bc2e-a4c1-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007985352s Jun 2 11:06:51.701: INFO: Pod "var-expansion-26e9bc2e-a4c1-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011972547s STEP: Saw pod success Jun 2 11:06:51.701: INFO: Pod "var-expansion-26e9bc2e-a4c1-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:06:51.704: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-26e9bc2e-a4c1-11ea-889d-0242ac110018 container dapi-container: STEP: delete the pod Jun 2 11:06:51.778: INFO: Waiting for pod var-expansion-26e9bc2e-a4c1-11ea-889d-0242ac110018 to disappear Jun 2 11:06:51.783: INFO: Pod var-expansion-26e9bc2e-a4c1-11ea-889d-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:06:51.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-rvxtm" for this suite. Jun 2 11:06:57.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:06:57.813: INFO: namespace: e2e-tests-var-expansion-rvxtm, resource: bindings, ignored listing per whitelist Jun 2 11:06:57.879: INFO: namespace e2e-tests-var-expansion-rvxtm deletion completed in 6.093061872s • [SLOW TEST:10.365 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:06:57.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-fkqfq Jun 2 11:07:02.033: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-fkqfq STEP: checking the pod's current state and verifying that restartCount is present Jun 2 11:07:02.037: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:11:02.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-fkqfq" for this suite. 
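The probing spec above runs a pod with an httpGet liveness probe against /healthz for roughly four minutes and verifies the restart count stays at zero. A sketch of such a probe, assuming the v1.13-era core/v1 types in which the probe's handler field is still named Handler (later API versions renamed it ProbeHandler); the image name is a placeholder for anything that answers 200 on /healthz:8080:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "example.com/healthz-server", // placeholder: any image serving 200 on /healthz:8080
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // v1.13-era field name; ProbeHandler in newer API versions
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
	// As long as /healthz keeps returning 200, the kubelet never kills the
	// container, so Status.ContainerStatuses[0].RestartCount stays at 0,
	// which is what the log above checks over its four-minute window.
}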
Jun 2 11:11:08.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:11:08.798: INFO: namespace: e2e-tests-container-probe-fkqfq, resource: bindings, ignored listing per whitelist Jun 2 11:11:08.824: INFO: namespace e2e-tests-container-probe-fkqfq deletion completed in 6.12350713s • [SLOW TEST:250.945 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:11:08.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jun 2 11:11:19.089: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-c9gz4 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 11:11:19.089: INFO: >>> kubeConfig: /root/.kube/config I0602 11:11:19.170644 6 log.go:172] (0xc000cb84d0) (0xc0015c8f00) Create stream I0602 11:11:19.170682 6 log.go:172] (0xc000cb84d0) (0xc0015c8f00) Stream added, broadcasting: 1 I0602 11:11:19.173602 6 log.go:172] (0xc000cb84d0) Reply frame received for 1 I0602 11:11:19.173672 6 log.go:172] (0xc000cb84d0) (0xc0015c8fa0) Create stream I0602 11:11:19.173692 6 log.go:172] (0xc000cb84d0) (0xc0015c8fa0) Stream added, broadcasting: 3 I0602 11:11:19.174782 6 log.go:172] (0xc000cb84d0) Reply frame received for 3 I0602 11:11:19.174825 6 log.go:172] (0xc000cb84d0) (0xc001b852c0) Create stream I0602 11:11:19.174838 6 log.go:172] (0xc000cb84d0) (0xc001b852c0) Stream added, broadcasting: 5 I0602 11:11:19.175802 6 log.go:172] (0xc000cb84d0) Reply frame received for 5 I0602 11:11:19.233403 6 log.go:172] (0xc000cb84d0) Data frame received for 5 I0602 11:11:19.233436 6 log.go:172] (0xc001b852c0) (5) Data frame handling I0602 11:11:19.233488 6 log.go:172] (0xc000cb84d0) Data frame received for 3 I0602 11:11:19.233540 6 log.go:172] (0xc0015c8fa0) (3) Data frame handling I0602 11:11:19.233567 6 log.go:172] (0xc0015c8fa0) (3) Data frame sent I0602 11:11:19.233585 6 log.go:172] (0xc000cb84d0) Data frame received for 3 I0602 11:11:19.233599 6 log.go:172] (0xc0015c8fa0) (3) Data frame handling I0602 11:11:19.235083 6 log.go:172] (0xc000cb84d0) Data frame received for 1 I0602 11:11:19.235107 6 log.go:172] (0xc0015c8f00) (1) Data frame handling I0602 11:11:19.235126 6 log.go:172] (0xc0015c8f00) (1) Data 
frame sent I0602 11:11:19.235144 6 log.go:172] (0xc000cb84d0) (0xc0015c8f00) Stream removed, broadcasting: 1 I0602 11:11:19.235157 6 log.go:172] (0xc000cb84d0) Go away received I0602 11:11:19.235382 6 log.go:172] (0xc000cb84d0) (0xc0015c8f00) Stream removed, broadcasting: 1 I0602 11:11:19.235411 6 log.go:172] (0xc000cb84d0) (0xc0015c8fa0) Stream removed, broadcasting: 3 I0602 11:11:19.235439 6 log.go:172] (0xc000cb84d0) (0xc001b852c0) Stream removed, broadcasting: 5 Jun 2 11:11:19.235: INFO: Exec stderr: "" Jun 2 11:11:19.235: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-c9gz4 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 11:11:19.235: INFO: >>> kubeConfig: /root/.kube/config I0602 11:11:19.285346 6 log.go:172] (0xc000c8b600) (0xc00069ebe0) Create stream I0602 11:11:19.285381 6 log.go:172] (0xc000c8b600) (0xc00069ebe0) Stream added, broadcasting: 1 I0602 11:11:19.287551 6 log.go:172] (0xc000c8b600) Reply frame received for 1 I0602 11:11:19.287606 6 log.go:172] (0xc000c8b600) (0xc001b85360) Create stream I0602 11:11:19.287623 6 log.go:172] (0xc000c8b600) (0xc001b85360) Stream added, broadcasting: 3 I0602 11:11:19.288634 6 log.go:172] (0xc000c8b600) Reply frame received for 3 I0602 11:11:19.288669 6 log.go:172] (0xc000c8b600) (0xc00069ec80) Create stream I0602 11:11:19.288680 6 log.go:172] (0xc000c8b600) (0xc00069ec80) Stream added, broadcasting: 5 I0602 11:11:19.289781 6 log.go:172] (0xc000c8b600) Reply frame received for 5 I0602 11:11:19.339296 6 log.go:172] (0xc000c8b600) Data frame received for 3 I0602 11:11:19.339344 6 log.go:172] (0xc001b85360) (3) Data frame handling I0602 11:11:19.339379 6 log.go:172] (0xc000c8b600) Data frame received for 5 I0602 11:11:19.339402 6 log.go:172] (0xc00069ec80) (5) Data frame handling I0602 11:11:19.339422 6 log.go:172] (0xc001b85360) (3) Data frame sent I0602 11:11:19.339431 6 log.go:172] (0xc000c8b600) Data frame received for 3 I0602 11:11:19.339440 6 log.go:172] (0xc001b85360) (3) Data frame handling I0602 11:11:19.340635 6 log.go:172] (0xc000c8b600) Data frame received for 1 I0602 11:11:19.340647 6 log.go:172] (0xc00069ebe0) (1) Data frame handling I0602 11:11:19.340661 6 log.go:172] (0xc00069ebe0) (1) Data frame sent I0602 11:11:19.340672 6 log.go:172] (0xc000c8b600) (0xc00069ebe0) Stream removed, broadcasting: 1 I0602 11:11:19.340694 6 log.go:172] (0xc000c8b600) Go away received I0602 11:11:19.340803 6 log.go:172] (0xc000c8b600) (0xc00069ebe0) Stream removed, broadcasting: 1 I0602 11:11:19.340817 6 log.go:172] (0xc000c8b600) (0xc001b85360) Stream removed, broadcasting: 3 I0602 11:11:19.340823 6 log.go:172] (0xc000c8b600) (0xc00069ec80) Stream removed, broadcasting: 5 Jun 2 11:11:19.340: INFO: Exec stderr: "" Jun 2 11:11:19.340: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-c9gz4 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 11:11:19.340: INFO: >>> kubeConfig: /root/.kube/config I0602 11:11:19.366325 6 log.go:172] (0xc000cb8a50) (0xc0015c9220) Create stream I0602 11:11:19.366354 6 log.go:172] (0xc000cb8a50) (0xc0015c9220) Stream added, broadcasting: 1 I0602 11:11:19.369866 6 log.go:172] (0xc000cb8a50) Reply frame received for 1 I0602 11:11:19.369909 6 log.go:172] (0xc000cb8a50) (0xc0015c92c0) Create stream I0602 11:11:19.369924 6 log.go:172] (0xc000cb8a50) (0xc0015c92c0) Stream added, broadcasting: 3 I0602 
11:11:19.370883 6 log.go:172] (0xc000cb8a50) Reply frame received for 3 I0602 11:11:19.370919 6 log.go:172] (0xc000cb8a50) (0xc0015c9360) Create stream I0602 11:11:19.370930 6 log.go:172] (0xc000cb8a50) (0xc0015c9360) Stream added, broadcasting: 5 I0602 11:11:19.371776 6 log.go:172] (0xc000cb8a50) Reply frame received for 5 I0602 11:11:19.436612 6 log.go:172] (0xc000cb8a50) Data frame received for 5 I0602 11:11:19.436644 6 log.go:172] (0xc0015c9360) (5) Data frame handling I0602 11:11:19.436665 6 log.go:172] (0xc000cb8a50) Data frame received for 3 I0602 11:11:19.436682 6 log.go:172] (0xc0015c92c0) (3) Data frame handling I0602 11:11:19.436691 6 log.go:172] (0xc0015c92c0) (3) Data frame sent I0602 11:11:19.436709 6 log.go:172] (0xc000cb8a50) Data frame received for 3 I0602 11:11:19.436715 6 log.go:172] (0xc0015c92c0) (3) Data frame handling I0602 11:11:19.438314 6 log.go:172] (0xc000cb8a50) Data frame received for 1 I0602 11:11:19.438353 6 log.go:172] (0xc0015c9220) (1) Data frame handling I0602 11:11:19.438371 6 log.go:172] (0xc0015c9220) (1) Data frame sent I0602 11:11:19.438388 6 log.go:172] (0xc000cb8a50) (0xc0015c9220) Stream removed, broadcasting: 1 I0602 11:11:19.438405 6 log.go:172] (0xc000cb8a50) Go away received I0602 11:11:19.438530 6 log.go:172] (0xc000cb8a50) (0xc0015c9220) Stream removed, broadcasting: 1 I0602 11:11:19.438546 6 log.go:172] (0xc000cb8a50) (0xc0015c92c0) Stream removed, broadcasting: 3 I0602 11:11:19.438553 6 log.go:172] (0xc000cb8a50) (0xc0015c9360) Stream removed, broadcasting: 5 Jun 2 11:11:19.438: INFO: Exec stderr: "" Jun 2 11:11:19.438: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-c9gz4 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 11:11:19.438: INFO: >>> kubeConfig: /root/.kube/config I0602 11:11:19.465714 6 log.go:172] (0xc00087dce0) (0xc0010eb9a0) Create stream I0602 11:11:19.465739 6 log.go:172] (0xc00087dce0) (0xc0010eb9a0) Stream added, broadcasting: 1 I0602 11:11:19.467864 6 log.go:172] (0xc00087dce0) Reply frame received for 1 I0602 11:11:19.467911 6 log.go:172] (0xc00087dce0) (0xc00069ed20) Create stream I0602 11:11:19.467933 6 log.go:172] (0xc00087dce0) (0xc00069ed20) Stream added, broadcasting: 3 I0602 11:11:19.469002 6 log.go:172] (0xc00087dce0) Reply frame received for 3 I0602 11:11:19.469050 6 log.go:172] (0xc00087dce0) (0xc000ca0000) Create stream I0602 11:11:19.469070 6 log.go:172] (0xc00087dce0) (0xc000ca0000) Stream added, broadcasting: 5 I0602 11:11:19.470577 6 log.go:172] (0xc00087dce0) Reply frame received for 5 I0602 11:11:19.536525 6 log.go:172] (0xc00087dce0) Data frame received for 5 I0602 11:11:19.536561 6 log.go:172] (0xc000ca0000) (5) Data frame handling I0602 11:11:19.536582 6 log.go:172] (0xc00087dce0) Data frame received for 3 I0602 11:11:19.536591 6 log.go:172] (0xc00069ed20) (3) Data frame handling I0602 11:11:19.536602 6 log.go:172] (0xc00069ed20) (3) Data frame sent I0602 11:11:19.536611 6 log.go:172] (0xc00087dce0) Data frame received for 3 I0602 11:11:19.536618 6 log.go:172] (0xc00069ed20) (3) Data frame handling I0602 11:11:19.537747 6 log.go:172] (0xc00087dce0) Data frame received for 1 I0602 11:11:19.537766 6 log.go:172] (0xc0010eb9a0) (1) Data frame handling I0602 11:11:19.537780 6 log.go:172] (0xc0010eb9a0) (1) Data frame sent I0602 11:11:19.537794 6 log.go:172] (0xc00087dce0) (0xc0010eb9a0) Stream removed, broadcasting: 1 I0602 11:11:19.537838 6 log.go:172] (0xc00087dce0) Go away received 
I0602 11:11:19.537885 6 log.go:172] (0xc00087dce0) (0xc0010eb9a0) Stream removed, broadcasting: 1 I0602 11:11:19.537900 6 log.go:172] (0xc00087dce0) (0xc00069ed20) Stream removed, broadcasting: 3 I0602 11:11:19.537913 6 log.go:172] (0xc00087dce0) (0xc000ca0000) Stream removed, broadcasting: 5 Jun 2 11:11:19.537: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jun 2 11:11:19.537: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-c9gz4 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 11:11:19.538: INFO: >>> kubeConfig: /root/.kube/config I0602 11:11:19.571143 6 log.go:172] (0xc000c8bad0) (0xc00069f220) Create stream I0602 11:11:19.571170 6 log.go:172] (0xc000c8bad0) (0xc00069f220) Stream added, broadcasting: 1 I0602 11:11:19.573749 6 log.go:172] (0xc000c8bad0) Reply frame received for 1 I0602 11:11:19.573791 6 log.go:172] (0xc000c8bad0) (0xc0015c9400) Create stream I0602 11:11:19.573805 6 log.go:172] (0xc000c8bad0) (0xc0015c9400) Stream added, broadcasting: 3 I0602 11:11:19.574860 6 log.go:172] (0xc000c8bad0) Reply frame received for 3 I0602 11:11:19.574903 6 log.go:172] (0xc000c8bad0) (0xc0010eba40) Create stream I0602 11:11:19.574925 6 log.go:172] (0xc000c8bad0) (0xc0010eba40) Stream added, broadcasting: 5 I0602 11:11:19.575868 6 log.go:172] (0xc000c8bad0) Reply frame received for 5 I0602 11:11:19.626645 6 log.go:172] (0xc000c8bad0) Data frame received for 5 I0602 11:11:19.626674 6 log.go:172] (0xc0010eba40) (5) Data frame handling I0602 11:11:19.626695 6 log.go:172] (0xc000c8bad0) Data frame received for 3 I0602 11:11:19.626708 6 log.go:172] (0xc0015c9400) (3) Data frame handling I0602 11:11:19.626719 6 log.go:172] (0xc0015c9400) (3) Data frame sent I0602 11:11:19.626727 6 log.go:172] (0xc000c8bad0) Data frame received for 3 I0602 11:11:19.626735 6 log.go:172] (0xc0015c9400) (3) Data frame handling I0602 11:11:19.628283 6 log.go:172] (0xc000c8bad0) Data frame received for 1 I0602 11:11:19.628309 6 log.go:172] (0xc00069f220) (1) Data frame handling I0602 11:11:19.628327 6 log.go:172] (0xc00069f220) (1) Data frame sent I0602 11:11:19.628609 6 log.go:172] (0xc000c8bad0) (0xc00069f220) Stream removed, broadcasting: 1 I0602 11:11:19.628676 6 log.go:172] (0xc000c8bad0) Go away received I0602 11:11:19.628716 6 log.go:172] (0xc000c8bad0) (0xc00069f220) Stream removed, broadcasting: 1 I0602 11:11:19.628737 6 log.go:172] (0xc000c8bad0) (0xc0015c9400) Stream removed, broadcasting: 3 I0602 11:11:19.628751 6 log.go:172] (0xc000c8bad0) (0xc0010eba40) Stream removed, broadcasting: 5 Jun 2 11:11:19.628: INFO: Exec stderr: "" Jun 2 11:11:19.628: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-c9gz4 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 11:11:19.628: INFO: >>> kubeConfig: /root/.kube/config I0602 11:11:19.655796 6 log.go:172] (0xc000cb8f20) (0xc0015c9860) Create stream I0602 11:11:19.655824 6 log.go:172] (0xc000cb8f20) (0xc0015c9860) Stream added, broadcasting: 1 I0602 11:11:19.658880 6 log.go:172] (0xc000cb8f20) Reply frame received for 1 I0602 11:11:19.658932 6 log.go:172] (0xc000cb8f20) (0xc001b85400) Create stream I0602 11:11:19.658946 6 log.go:172] (0xc000cb8f20) (0xc001b85400) Stream added, broadcasting: 3 I0602 11:11:19.659859 6 log.go:172] (0xc000cb8f20) Reply frame received for 3 
I0602 11:11:19.659903 6 log.go:172] (0xc000cb8f20) (0xc0010ebae0) Create stream I0602 11:11:19.659913 6 log.go:172] (0xc000cb8f20) (0xc0010ebae0) Stream added, broadcasting: 5 I0602 11:11:19.660669 6 log.go:172] (0xc000cb8f20) Reply frame received for 5 I0602 11:11:19.742528 6 log.go:172] (0xc000cb8f20) Data frame received for 5 I0602 11:11:19.742557 6 log.go:172] (0xc0010ebae0) (5) Data frame handling I0602 11:11:19.742574 6 log.go:172] (0xc000cb8f20) Data frame received for 3 I0602 11:11:19.742599 6 log.go:172] (0xc001b85400) (3) Data frame handling I0602 11:11:19.742614 6 log.go:172] (0xc001b85400) (3) Data frame sent I0602 11:11:19.742623 6 log.go:172] (0xc000cb8f20) Data frame received for 3 I0602 11:11:19.742633 6 log.go:172] (0xc001b85400) (3) Data frame handling I0602 11:11:19.744266 6 log.go:172] (0xc000cb8f20) Data frame received for 1 I0602 11:11:19.744279 6 log.go:172] (0xc0015c9860) (1) Data frame handling I0602 11:11:19.744287 6 log.go:172] (0xc0015c9860) (1) Data frame sent I0602 11:11:19.744486 6 log.go:172] (0xc000cb8f20) (0xc0015c9860) Stream removed, broadcasting: 1 I0602 11:11:19.744563 6 log.go:172] (0xc000cb8f20) Go away received I0602 11:11:19.744639 6 log.go:172] (0xc000cb8f20) (0xc0015c9860) Stream removed, broadcasting: 1 I0602 11:11:19.744667 6 log.go:172] (0xc000cb8f20) (0xc001b85400) Stream removed, broadcasting: 3 I0602 11:11:19.744687 6 log.go:172] (0xc000cb8f20) (0xc0010ebae0) Stream removed, broadcasting: 5 Jun 2 11:11:19.744: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 2 11:11:19.744: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-c9gz4 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 11:11:19.744: INFO: >>> kubeConfig: /root/.kube/config I0602 11:11:19.820511 6 log.go:172] (0xc000cb93f0) (0xc0015c9ae0) Create stream I0602 11:11:19.820548 6 log.go:172] (0xc000cb93f0) (0xc0015c9ae0) Stream added, broadcasting: 1 I0602 11:11:19.823852 6 log.go:172] (0xc000cb93f0) Reply frame received for 1 I0602 11:11:19.823896 6 log.go:172] (0xc000cb93f0) (0xc0010ebb80) Create stream I0602 11:11:19.823913 6 log.go:172] (0xc000cb93f0) (0xc0010ebb80) Stream added, broadcasting: 3 I0602 11:11:19.824927 6 log.go:172] (0xc000cb93f0) Reply frame received for 3 I0602 11:11:19.824986 6 log.go:172] (0xc000cb93f0) (0xc00069f2c0) Create stream I0602 11:11:19.825007 6 log.go:172] (0xc000cb93f0) (0xc00069f2c0) Stream added, broadcasting: 5 I0602 11:11:19.826082 6 log.go:172] (0xc000cb93f0) Reply frame received for 5 I0602 11:11:19.901765 6 log.go:172] (0xc000cb93f0) Data frame received for 5 I0602 11:11:19.901792 6 log.go:172] (0xc00069f2c0) (5) Data frame handling I0602 11:11:19.901814 6 log.go:172] (0xc000cb93f0) Data frame received for 3 I0602 11:11:19.901830 6 log.go:172] (0xc0010ebb80) (3) Data frame handling I0602 11:11:19.901843 6 log.go:172] (0xc0010ebb80) (3) Data frame sent I0602 11:11:19.901849 6 log.go:172] (0xc000cb93f0) Data frame received for 3 I0602 11:11:19.901856 6 log.go:172] (0xc0010ebb80) (3) Data frame handling I0602 11:11:19.902888 6 log.go:172] (0xc000cb93f0) Data frame received for 1 I0602 11:11:19.902899 6 log.go:172] (0xc0015c9ae0) (1) Data frame handling I0602 11:11:19.902905 6 log.go:172] (0xc0015c9ae0) (1) Data frame sent I0602 11:11:19.902913 6 log.go:172] (0xc000cb93f0) (0xc0015c9ae0) Stream removed, broadcasting: 1 I0602 11:11:19.902923 6 log.go:172] 
(0xc000cb93f0) Go away received I0602 11:11:19.902996 6 log.go:172] (0xc000cb93f0) (0xc0015c9ae0) Stream removed, broadcasting: 1 I0602 11:11:19.903012 6 log.go:172] (0xc000cb93f0) (0xc0010ebb80) Stream removed, broadcasting: 3 I0602 11:11:19.903020 6 log.go:172] (0xc000cb93f0) (0xc00069f2c0) Stream removed, broadcasting: 5 Jun 2 11:11:19.903: INFO: Exec stderr: "" Jun 2 11:11:19.903: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-c9gz4 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 11:11:19.903: INFO: >>> kubeConfig: /root/.kube/config I0602 11:11:19.925382 6 log.go:172] (0xc000cb98c0) (0xc0015c9ea0) Create stream I0602 11:11:19.925414 6 log.go:172] (0xc000cb98c0) (0xc0015c9ea0) Stream added, broadcasting: 1 I0602 11:11:19.927346 6 log.go:172] (0xc000cb98c0) Reply frame received for 1 I0602 11:11:19.927378 6 log.go:172] (0xc000cb98c0) (0xc001b854a0) Create stream I0602 11:11:19.927389 6 log.go:172] (0xc000cb98c0) (0xc001b854a0) Stream added, broadcasting: 3 I0602 11:11:19.928368 6 log.go:172] (0xc000cb98c0) Reply frame received for 3 I0602 11:11:19.928417 6 log.go:172] (0xc000cb98c0) (0xc00069f400) Create stream I0602 11:11:19.928431 6 log.go:172] (0xc000cb98c0) (0xc00069f400) Stream added, broadcasting: 5 I0602 11:11:19.929446 6 log.go:172] (0xc000cb98c0) Reply frame received for 5 I0602 11:11:19.995752 6 log.go:172] (0xc000cb98c0) Data frame received for 5 I0602 11:11:19.995791 6 log.go:172] (0xc00069f400) (5) Data frame handling I0602 11:11:19.995816 6 log.go:172] (0xc000cb98c0) Data frame received for 3 I0602 11:11:19.995831 6 log.go:172] (0xc001b854a0) (3) Data frame handling I0602 11:11:19.995853 6 log.go:172] (0xc001b854a0) (3) Data frame sent I0602 11:11:19.995866 6 log.go:172] (0xc000cb98c0) Data frame received for 3 I0602 11:11:19.995877 6 log.go:172] (0xc001b854a0) (3) Data frame handling I0602 11:11:19.997535 6 log.go:172] (0xc000cb98c0) Data frame received for 1 I0602 11:11:19.997573 6 log.go:172] (0xc0015c9ea0) (1) Data frame handling I0602 11:11:19.997612 6 log.go:172] (0xc0015c9ea0) (1) Data frame sent I0602 11:11:19.997635 6 log.go:172] (0xc000cb98c0) (0xc0015c9ea0) Stream removed, broadcasting: 1 I0602 11:11:19.997653 6 log.go:172] (0xc000cb98c0) Go away received I0602 11:11:19.997820 6 log.go:172] (0xc000cb98c0) (0xc0015c9ea0) Stream removed, broadcasting: 1 I0602 11:11:19.997850 6 log.go:172] (0xc000cb98c0) (0xc001b854a0) Stream removed, broadcasting: 3 I0602 11:11:19.997863 6 log.go:172] (0xc000cb98c0) (0xc00069f400) Stream removed, broadcasting: 5 Jun 2 11:11:19.997: INFO: Exec stderr: "" Jun 2 11:11:19.997: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-c9gz4 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 11:11:19.997: INFO: >>> kubeConfig: /root/.kube/config I0602 11:11:20.034410 6 log.go:172] (0xc001d162c0) (0xc000ca0780) Create stream I0602 11:11:20.034440 6 log.go:172] (0xc001d162c0) (0xc000ca0780) Stream added, broadcasting: 1 I0602 11:11:20.036938 6 log.go:172] (0xc001d162c0) Reply frame received for 1 I0602 11:11:20.036964 6 log.go:172] (0xc001d162c0) (0xc000ca08c0) Create stream I0602 11:11:20.036977 6 log.go:172] (0xc001d162c0) (0xc000ca08c0) Stream added, broadcasting: 3 I0602 11:11:20.038059 6 log.go:172] (0xc001d162c0) Reply frame received for 3 I0602 11:11:20.038088 6 log.go:172] (0xc001d162c0) 
(0xc0010ebc20) Create stream I0602 11:11:20.038099 6 log.go:172] (0xc001d162c0) (0xc0010ebc20) Stream added, broadcasting: 5 I0602 11:11:20.038982 6 log.go:172] (0xc001d162c0) Reply frame received for 5 I0602 11:11:20.109928 6 log.go:172] (0xc001d162c0) Data frame received for 5 I0602 11:11:20.109962 6 log.go:172] (0xc0010ebc20) (5) Data frame handling I0602 11:11:20.109982 6 log.go:172] (0xc001d162c0) Data frame received for 3 I0602 11:11:20.109990 6 log.go:172] (0xc000ca08c0) (3) Data frame handling I0602 11:11:20.110002 6 log.go:172] (0xc000ca08c0) (3) Data frame sent I0602 11:11:20.110013 6 log.go:172] (0xc001d162c0) Data frame received for 3 I0602 11:11:20.110020 6 log.go:172] (0xc000ca08c0) (3) Data frame handling I0602 11:11:20.111487 6 log.go:172] (0xc001d162c0) Data frame received for 1 I0602 11:11:20.111513 6 log.go:172] (0xc000ca0780) (1) Data frame handling I0602 11:11:20.111538 6 log.go:172] (0xc000ca0780) (1) Data frame sent I0602 11:11:20.111577 6 log.go:172] (0xc001d162c0) (0xc000ca0780) Stream removed, broadcasting: 1 I0602 11:11:20.111602 6 log.go:172] (0xc001d162c0) Go away received I0602 11:11:20.111823 6 log.go:172] (0xc001d162c0) (0xc000ca0780) Stream removed, broadcasting: 1 I0602 11:11:20.111854 6 log.go:172] (0xc001d162c0) (0xc000ca08c0) Stream removed, broadcasting: 3 I0602 11:11:20.111865 6 log.go:172] (0xc001d162c0) (0xc0010ebc20) Stream removed, broadcasting: 5 Jun 2 11:11:20.111: INFO: Exec stderr: "" Jun 2 11:11:20.111: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-c9gz4 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 11:11:20.111: INFO: >>> kubeConfig: /root/.kube/config I0602 11:11:20.143131 6 log.go:172] (0xc00087d6b0) (0xc002068000) Create stream I0602 11:11:20.143164 6 log.go:172] (0xc00087d6b0) (0xc002068000) Stream added, broadcasting: 1 I0602 11:11:20.144935 6 log.go:172] (0xc00087d6b0) Reply frame received for 1 I0602 11:11:20.144980 6 log.go:172] (0xc00087d6b0) (0xc000ca61e0) Create stream I0602 11:11:20.144991 6 log.go:172] (0xc00087d6b0) (0xc000ca61e0) Stream added, broadcasting: 3 I0602 11:11:20.146034 6 log.go:172] (0xc00087d6b0) Reply frame received for 3 I0602 11:11:20.146071 6 log.go:172] (0xc00087d6b0) (0xc001edc000) Create stream I0602 11:11:20.146084 6 log.go:172] (0xc00087d6b0) (0xc001edc000) Stream added, broadcasting: 5 I0602 11:11:20.146805 6 log.go:172] (0xc00087d6b0) Reply frame received for 5 I0602 11:11:20.207625 6 log.go:172] (0xc00087d6b0) Data frame received for 3 I0602 11:11:20.207665 6 log.go:172] (0xc000ca61e0) (3) Data frame handling I0602 11:11:20.207681 6 log.go:172] (0xc000ca61e0) (3) Data frame sent I0602 11:11:20.207693 6 log.go:172] (0xc00087d6b0) Data frame received for 3 I0602 11:11:20.207702 6 log.go:172] (0xc000ca61e0) (3) Data frame handling I0602 11:11:20.207731 6 log.go:172] (0xc00087d6b0) Data frame received for 5 I0602 11:11:20.207757 6 log.go:172] (0xc001edc000) (5) Data frame handling I0602 11:11:20.209330 6 log.go:172] (0xc00087d6b0) Data frame received for 1 I0602 11:11:20.209351 6 log.go:172] (0xc002068000) (1) Data frame handling I0602 11:11:20.209368 6 log.go:172] (0xc002068000) (1) Data frame sent I0602 11:11:20.209378 6 log.go:172] (0xc00087d6b0) (0xc002068000) Stream removed, broadcasting: 1 I0602 11:11:20.209473 6 log.go:172] (0xc00087d6b0) (0xc002068000) Stream removed, broadcasting: 1 I0602 11:11:20.209502 6 log.go:172] (0xc00087d6b0) Go away received I0602 
11:11:20.209544 6 log.go:172] (0xc00087d6b0) (0xc000ca61e0) Stream removed, broadcasting: 3 I0602 11:11:20.209575 6 log.go:172] (0xc00087d6b0) (0xc001edc000) Stream removed, broadcasting: 5 Jun 2 11:11:20.209: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:11:20.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-c9gz4" for this suite. Jun 2 11:12:06.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:12:06.342: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-c9gz4, resource: bindings, ignored listing per whitelist Jun 2 11:12:06.402: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-c9gz4 deletion completed in 46.144566848s • [SLOW TEST:57.577 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:12:06.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-e4fb68a5-a4c1-11ea-889d-0242ac110018 STEP: Creating a pod to test consume secrets Jun 2 11:12:06.589: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e4ffb3fd-a4c1-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-7ld5g" to be "success or failure" Jun 2 11:12:06.593: INFO: Pod "pod-projected-secrets-e4ffb3fd-a4c1-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.60086ms Jun 2 11:12:08.598: INFO: Pod "pod-projected-secrets-e4ffb3fd-a4c1-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008780746s Jun 2 11:12:10.616: INFO: Pod "pod-projected-secrets-e4ffb3fd-a4c1-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027092199s STEP: Saw pod success Jun 2 11:12:10.616: INFO: Pod "pod-projected-secrets-e4ffb3fd-a4c1-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:12:10.618: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-e4ffb3fd-a4c1-11ea-889d-0242ac110018 container secret-volume-test: STEP: delete the pod Jun 2 11:12:10.710: INFO: Waiting for pod pod-projected-secrets-e4ffb3fd-a4c1-11ea-889d-0242ac110018 to disappear Jun 2 11:12:10.761: INFO: Pod pod-projected-secrets-e4ffb3fd-a4c1-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:12:10.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7ld5g" for this suite. Jun 2 11:12:16.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:12:16.856: INFO: namespace: e2e-tests-projected-7ld5g, resource: bindings, ignored listing per whitelist Jun 2 11:12:16.860: INFO: namespace e2e-tests-projected-7ld5g deletion completed in 6.094520171s • [SLOW TEST:10.458 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:12:16.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 2 11:12:16.988: INFO: Waiting up to 5m0s for pod "downward-api-eb317698-a4c1-11ea-889d-0242ac110018" in namespace "e2e-tests-downward-api-sjv9p" to be "success or failure" Jun 2 11:12:16.993: INFO: Pod "downward-api-eb317698-a4c1-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.962848ms Jun 2 11:12:18.997: INFO: Pod "downward-api-eb317698-a4c1-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008969903s Jun 2 11:12:21.002: INFO: Pod "downward-api-eb317698-a4c1-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013695845s STEP: Saw pod success Jun 2 11:12:21.002: INFO: Pod "downward-api-eb317698-a4c1-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:12:21.006: INFO: Trying to get logs from node hunter-worker2 pod downward-api-eb317698-a4c1-11ea-889d-0242ac110018 container dapi-container: STEP: delete the pod Jun 2 11:12:21.040: INFO: Waiting for pod downward-api-eb317698-a4c1-11ea-889d-0242ac110018 to disappear Jun 2 11:12:21.046: INFO: Pod downward-api-eb317698-a4c1-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:12:21.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-sjv9p" for this suite. Jun 2 11:12:27.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:12:27.106: INFO: namespace: e2e-tests-downward-api-sjv9p, resource: bindings, ignored listing per whitelist Jun 2 11:12:27.135: INFO: namespace e2e-tests-downward-api-sjv9p deletion completed in 6.084925545s • [SLOW TEST:10.274 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:12:27.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jun 2 11:12:27.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wqlxq' Jun 2 11:12:30.007: INFO: stderr: "" Jun 2 11:12:30.007: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 2 11:12:30.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wqlxq' Jun 2 11:12:30.141: INFO: stderr: "" Jun 2 11:12:30.141: INFO: stdout: "update-demo-nautilus-l4fmt update-demo-nautilus-ndn7c " Jun 2 11:12:30.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l4fmt -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wqlxq' Jun 2 11:12:30.237: INFO: stderr: "" Jun 2 11:12:30.237: INFO: stdout: "" Jun 2 11:12:30.237: INFO: update-demo-nautilus-l4fmt is created but not running Jun 2 11:12:35.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wqlxq' Jun 2 11:12:35.343: INFO: stderr: "" Jun 2 11:12:35.343: INFO: stdout: "update-demo-nautilus-l4fmt update-demo-nautilus-ndn7c " Jun 2 11:12:35.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l4fmt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wqlxq' Jun 2 11:12:35.439: INFO: stderr: "" Jun 2 11:12:35.439: INFO: stdout: "true" Jun 2 11:12:35.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l4fmt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wqlxq' Jun 2 11:12:35.541: INFO: stderr: "" Jun 2 11:12:35.541: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 2 11:12:35.541: INFO: validating pod update-demo-nautilus-l4fmt Jun 2 11:12:35.545: INFO: got data: { "image": "nautilus.jpg" } Jun 2 11:12:35.546: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 2 11:12:35.546: INFO: update-demo-nautilus-l4fmt is verified up and running Jun 2 11:12:35.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ndn7c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wqlxq' Jun 2 11:12:35.646: INFO: stderr: "" Jun 2 11:12:35.646: INFO: stdout: "true" Jun 2 11:12:35.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ndn7c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wqlxq' Jun 2 11:12:35.742: INFO: stderr: "" Jun 2 11:12:35.742: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 2 11:12:35.742: INFO: validating pod update-demo-nautilus-ndn7c Jun 2 11:12:35.746: INFO: got data: { "image": "nautilus.jpg" } Jun 2 11:12:35.746: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 2 11:12:35.746: INFO: update-demo-nautilus-ndn7c is verified up and running STEP: using delete to clean up resources Jun 2 11:12:35.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wqlxq' Jun 2 11:12:35.863: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 2 11:12:35.863: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 2 11:12:35.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-wqlxq' Jun 2 11:12:35.978: INFO: stderr: "No resources found.\n" Jun 2 11:12:35.978: INFO: stdout: "" Jun 2 11:12:35.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-wqlxq -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 2 11:12:36.076: INFO: stderr: "" Jun 2 11:12:36.076: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:12:36.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wqlxq" for this suite. Jun 2 11:12:58.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:12:58.123: INFO: namespace: e2e-tests-kubectl-wqlxq, resource: bindings, ignored listing per whitelist Jun 2 11:12:58.196: INFO: namespace e2e-tests-kubectl-wqlxq deletion completed in 22.112770136s • [SLOW TEST:31.061 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:12:58.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:12:58.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-d9xrl" for this suite. 
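A note on the Downward API spec a little further up ("should provide default limits.cpu/memory from node allocatable"): the env vars it checks come from resourceFieldRef, and when a container declares no limits those references fall back to the node's allocatable capacity. A minimal sketch of that wiring; the pod name, variable names and busybox image here are illustrative, not the generated ones from this run:

    # Expose the container's effective CPU/memory limits as environment variables.
    # With no resources.limits set, the values default to the node's allocatable cpu/memory.
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-defaults-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
    EOF
    kubectl logs downward-defaults-demo   # once the pod has completed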
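The Pods Set QOS Class spec tearing down here only needs to see .status.qosClass populated: the control plane derives the class (Guaranteed, Burstable or BestEffort) from the requests and limits in the pod spec when the pod is created. A hand-run equivalent, using an illustrative pod rather than the one the test generated:

    # A pod whose single container requests and limits identical cpu/memory is classed Guaranteed.
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: qos-demo
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 100m
            memory: 100Mi
    EOF
    kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # Guaranteed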
Jun 2 11:13:20.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:13:20.464: INFO: namespace: e2e-tests-pods-d9xrl, resource: bindings, ignored listing per whitelist Jun 2 11:13:20.511: INFO: namespace e2e-tests-pods-d9xrl deletion completed in 22.136811205s • [SLOW TEST:22.315 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:13:20.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 2 11:13:25.172: INFO: Successfully updated pod "labelsupdate11235c4c-a4c2-11ea-889d-0242ac110018" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:13:27.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-t9pjz" for this suite. 
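The labels-on-modification spec above exercises the downward API volume refresh path: a file in the volume mirrors metadata.labels, and after an API update the kubelet rewrites that file inside the running container. A sketch of the shape of such a pod; the pod name, image and label values are illustrative, not the generated ones:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: labels-demo
      labels:
        team: blue
    spec:
      containers:
      - name: client
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
    EOF
    # Change a label; the kubelet eventually refreshes /etc/podinfo/labels in the container.
    kubectl label pod labels-demo team=red --overwrite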
Jun 2 11:13:49.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:13:49.324: INFO: namespace: e2e-tests-downward-api-t9pjz, resource: bindings, ignored listing per whitelist Jun 2 11:13:49.375: INFO: namespace e2e-tests-downward-api-t9pjz deletion completed in 22.096943528s • [SLOW TEST:28.864 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:13:49.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 2 11:13:54.027: INFO: Successfully updated pod "pod-update-activedeadlineseconds-22531733-a4c2-11ea-889d-0242ac110018" Jun 2 11:13:54.027: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-22531733-a4c2-11ea-889d-0242ac110018" in namespace "e2e-tests-pods-m824h" to be "terminated due to deadline exceeded" Jun 2 11:13:54.048: INFO: Pod "pod-update-activedeadlineseconds-22531733-a4c2-11ea-889d-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 20.498935ms Jun 2 11:13:56.052: INFO: Pod "pod-update-activedeadlineseconds-22531733-a4c2-11ea-889d-0242ac110018": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.02477668s Jun 2 11:13:56.052: INFO: Pod "pod-update-activedeadlineseconds-22531733-a4c2-11ea-889d-0242ac110018" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:13:56.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-m824h" for this suite. 
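The activeDeadlineSeconds spec above relies on the fact that this is one of the few pod-spec fields that can be mutated on a live pod, and that setting it to a small value drives the pod to phase Failed with reason DeadlineExceeded, which is exactly the condition the run waits for. A hand-run sketch against an already-running pod (the name long-runner is illustrative):

    # Leave the running pod only a few seconds of remaining lifetime.
    kubectl patch pod long-runner -p '{"spec":{"activeDeadlineSeconds":5}}'
    # Shortly afterwards the kubelet terminates it:
    kubectl get pod long-runner -o jsonpath='{.status.phase}/{.status.reason}'   # Failed/DeadlineExceeded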
Jun 2 11:14:02.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:14:02.099: INFO: namespace: e2e-tests-pods-m824h, resource: bindings, ignored listing per whitelist Jun 2 11:14:02.139: INFO: namespace e2e-tests-pods-m824h deletion completed in 6.082707132s • [SLOW TEST:12.763 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:14:02.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-wqbnl STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 2 11:14:02.240: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 2 11:14:26.439: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.162:8080/dial?request=hostName&protocol=udp&host=10.244.1.161&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-wqbnl PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 11:14:26.439: INFO: >>> kubeConfig: /root/.kube/config I0602 11:14:26.473707 6 log.go:172] (0xc001a7c2c0) (0xc001c57f40) Create stream I0602 11:14:26.473735 6 log.go:172] (0xc001a7c2c0) (0xc001c57f40) Stream added, broadcasting: 1 I0602 11:14:26.475265 6 log.go:172] (0xc001a7c2c0) Reply frame received for 1 I0602 11:14:26.475288 6 log.go:172] (0xc001a7c2c0) (0xc001a48a00) Create stream I0602 11:14:26.475297 6 log.go:172] (0xc001a7c2c0) (0xc001a48a00) Stream added, broadcasting: 3 I0602 11:14:26.476140 6 log.go:172] (0xc001a7c2c0) Reply frame received for 3 I0602 11:14:26.476187 6 log.go:172] (0xc001a7c2c0) (0xc002006000) Create stream I0602 11:14:26.476198 6 log.go:172] (0xc001a7c2c0) (0xc002006000) Stream added, broadcasting: 5 I0602 11:14:26.476920 6 log.go:172] (0xc001a7c2c0) Reply frame received for 5 I0602 11:14:26.585018 6 log.go:172] (0xc001a7c2c0) Data frame received for 3 I0602 11:14:26.585049 6 log.go:172] (0xc001a48a00) (3) Data frame handling I0602 11:14:26.585393 6 log.go:172] (0xc001a48a00) (3) Data frame sent I0602 11:14:26.586112 6 log.go:172] (0xc001a7c2c0) Data frame received for 5 I0602 11:14:26.586162 6 log.go:172] (0xc002006000) (5) Data frame handling I0602 11:14:26.586214 6 log.go:172] (0xc001a7c2c0) Data frame received for 3 I0602 11:14:26.586240 6 log.go:172] (0xc001a48a00) (3) Data frame handling I0602 11:14:26.588984 6 log.go:172] (0xc001a7c2c0) Data frame received for 1 I0602 11:14:26.589014 6 
log.go:172] (0xc001c57f40) (1) Data frame handling I0602 11:14:26.589033 6 log.go:172] (0xc001c57f40) (1) Data frame sent I0602 11:14:26.589069 6 log.go:172] (0xc001a7c2c0) (0xc001c57f40) Stream removed, broadcasting: 1 I0602 11:14:26.589302 6 log.go:172] (0xc001a7c2c0) Go away received I0602 11:14:26.589386 6 log.go:172] (0xc001a7c2c0) (0xc001c57f40) Stream removed, broadcasting: 1 I0602 11:14:26.589412 6 log.go:172] (0xc001a7c2c0) (0xc001a48a00) Stream removed, broadcasting: 3 I0602 11:14:26.589429 6 log.go:172] (0xc001a7c2c0) (0xc002006000) Stream removed, broadcasting: 5 Jun 2 11:14:26.589: INFO: Waiting for endpoints: map[] Jun 2 11:14:26.592: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.162:8080/dial?request=hostName&protocol=udp&host=10.244.2.244&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-wqbnl PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 2 11:14:26.592: INFO: >>> kubeConfig: /root/.kube/config I0602 11:14:26.619598 6 log.go:172] (0xc00087dce0) (0xc001ba0e60) Create stream I0602 11:14:26.619640 6 log.go:172] (0xc00087dce0) (0xc001ba0e60) Stream added, broadcasting: 1 I0602 11:14:26.620998 6 log.go:172] (0xc00087dce0) Reply frame received for 1 I0602 11:14:26.621027 6 log.go:172] (0xc00087dce0) (0xc0013408c0) Create stream I0602 11:14:26.621036 6 log.go:172] (0xc00087dce0) (0xc0013408c0) Stream added, broadcasting: 3 I0602 11:14:26.621979 6 log.go:172] (0xc00087dce0) Reply frame received for 3 I0602 11:14:26.622006 6 log.go:172] (0xc00087dce0) (0xc001340960) Create stream I0602 11:14:26.622015 6 log.go:172] (0xc00087dce0) (0xc001340960) Stream added, broadcasting: 5 I0602 11:14:26.622671 6 log.go:172] (0xc00087dce0) Reply frame received for 5 I0602 11:14:26.689004 6 log.go:172] (0xc00087dce0) Data frame received for 3 I0602 11:14:26.689040 6 log.go:172] (0xc0013408c0) (3) Data frame handling I0602 11:14:26.689080 6 log.go:172] (0xc0013408c0) (3) Data frame sent I0602 11:14:26.689443 6 log.go:172] (0xc00087dce0) Data frame received for 3 I0602 11:14:26.689471 6 log.go:172] (0xc0013408c0) (3) Data frame handling I0602 11:14:26.689508 6 log.go:172] (0xc00087dce0) Data frame received for 5 I0602 11:14:26.689535 6 log.go:172] (0xc001340960) (5) Data frame handling I0602 11:14:26.691300 6 log.go:172] (0xc00087dce0) Data frame received for 1 I0602 11:14:26.691321 6 log.go:172] (0xc001ba0e60) (1) Data frame handling I0602 11:14:26.691334 6 log.go:172] (0xc001ba0e60) (1) Data frame sent I0602 11:14:26.691353 6 log.go:172] (0xc00087dce0) (0xc001ba0e60) Stream removed, broadcasting: 1 I0602 11:14:26.691451 6 log.go:172] (0xc00087dce0) Go away received I0602 11:14:26.691505 6 log.go:172] (0xc00087dce0) (0xc001ba0e60) Stream removed, broadcasting: 1 I0602 11:14:26.691545 6 log.go:172] (0xc00087dce0) (0xc0013408c0) Stream removed, broadcasting: 3 I0602 11:14:26.691580 6 log.go:172] (0xc00087dce0) (0xc001340960) Stream removed, broadcasting: 5 Jun 2 11:14:26.691: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:14:26.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-wqbnl" for this suite. 
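Both UDP probes above go through the /dial endpoint on one of the netserver test pods: the queried pod sends a UDP hostName request to the target pod and reports back which hostnames answered, and the check passes once the framework's map of outstanding endpoints is empty ("Waiting for endpoints: map[]"). The same request can be re-issued by hand from the hostexec pod; the pod IPs below are the ones from this run and only exist for the lifetime of the test namespace:

    # Ask the pod at 10.244.1.162 to dial the UDP endpoint of the pod at 10.244.2.244.
    kubectl -n e2e-tests-pod-network-test-wqbnl exec host-test-container-pod -c hostexec -- \
      curl -g -q -s 'http://10.244.1.162:8080/dial?request=hostName&protocol=udp&host=10.244.2.244&port=8081&tries=1'
    # A healthy mesh returns a small JSON body listing the hostname(s) that answered.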
Jun 2 11:14:48.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:14:48.791: INFO: namespace: e2e-tests-pod-network-test-wqbnl, resource: bindings, ignored listing per whitelist Jun 2 11:14:48.815: INFO: namespace e2e-tests-pod-network-test-wqbnl deletion completed in 22.119710564s • [SLOW TEST:46.675 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:14:48.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Jun 2 11:14:48.934: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:14:49.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kbtdg" for this suite. 
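"-p 0" asks kubectl proxy to bind an ephemeral local port; the proxy prints the address it ends up serving on, and the spec then curls /api/ through it. A rough interactive equivalent; parsing the port out of the proxy's startup message is a convenience for this sketch, not something the framework does, and the exact wording of that message is an assumption:

    kubectl proxy -p 0 --disable-filter >/tmp/proxy.out 2>&1 &
    sleep 1
    # The startup line looks like "Starting to serve on 127.0.0.1:<port>".
    PORT=$(grep -o '127.0.0.1:[0-9]*' /tmp/proxy.out | cut -d: -f2)
    curl -s "http://127.0.0.1:${PORT}/api/"   # should return the APIVersions object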
Jun 2 11:14:55.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:14:55.118: INFO: namespace: e2e-tests-kubectl-kbtdg, resource: bindings, ignored listing per whitelist Jun 2 11:14:55.153: INFO: namespace e2e-tests-kubectl-kbtdg deletion completed in 6.103650627s • [SLOW TEST:6.338 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:14:55.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 2 11:14:55.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Jun 2 11:14:55.335: INFO: stderr: "" Jun 2 11:14:55.335: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Jun 2 11:14:55.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dg998' Jun 2 11:14:55.596: INFO: stderr: "" Jun 2 11:14:55.596: INFO: stdout: "replicationcontroller/redis-master created\n" Jun 2 11:14:55.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dg998' Jun 2 11:14:55.931: INFO: stderr: "" Jun 2 11:14:55.931: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jun 2 11:14:57.046: INFO: Selector matched 1 pods for map[app:redis] Jun 2 11:14:57.046: INFO: Found 0 / 1 Jun 2 11:14:57.936: INFO: Selector matched 1 pods for map[app:redis] Jun 2 11:14:57.936: INFO: Found 0 / 1 Jun 2 11:14:58.936: INFO: Selector matched 1 pods for map[app:redis] Jun 2 11:14:58.936: INFO: Found 1 / 1 Jun 2 11:14:58.936: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 2 11:14:58.940: INFO: Selector matched 1 pods for map[app:redis] Jun 2 11:14:58.940: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
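The two manifests piped into "kubectl create -f -" above are not echoed in the log; judging from the describe output further down they amount to a one-replica redis-master replication controller plus a ClusterIP service selecting app=redis,role=master. A minimal approximation, where any field not visible in the describe output is an assumption:

    kubectl --namespace=e2e-tests-kubectl-dg998 create -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: redis-master
      labels:
        app: redis
        role: master
    spec:
      replicas: 1
      selector:
        app: redis
        role: master
      template:
        metadata:
          labels:
            app: redis
            role: master
        spec:
          containers:
          - name: redis-master
            image: gcr.io/kubernetes-e2e-test-images/redis:1.0
            ports:
            - name: redis-server
              containerPort: 6379
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: redis-master
      labels:
        app: redis
        role: master
    spec:
      selector:
        app: redis
        role: master
      ports:
      - port: 6379
        targetPort: redis-server
    EOF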
Jun 2 11:14:58.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-jbwrn --namespace=e2e-tests-kubectl-dg998' Jun 2 11:14:59.081: INFO: stderr: "" Jun 2 11:14:59.081: INFO: stdout: "Name: redis-master-jbwrn\nNamespace: e2e-tests-kubectl-dg998\nPriority: 0\nPriorityClassName: \nNode: hunter-worker/172.17.0.3\nStart Time: Tue, 02 Jun 2020 11:14:55 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.163\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://f3c28b19291f723bd5144404e0bed2600f11699e556e8b2da996cb616959f4bd\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 02 Jun 2020 11:14:58 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-r97tw (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-r97tw:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-r97tw\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned e2e-tests-kubectl-dg998/redis-master-jbwrn to hunter-worker\n Normal Pulled 3s kubelet, hunter-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, hunter-worker Created container\n Normal Started 1s kubelet, hunter-worker Started container\n" Jun 2 11:14:59.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-dg998' Jun 2 11:14:59.206: INFO: stderr: "" Jun 2 11:14:59.206: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-dg998\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-jbwrn\n" Jun 2 11:14:59.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-dg998' Jun 2 11:14:59.311: INFO: stderr: "" Jun 2 11:14:59.311: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-dg998\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.97.125.200\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.163:6379\nSession Affinity: None\nEvents: \n" Jun 2 11:14:59.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' Jun 2 11:14:59.454: INFO: stderr: "" Jun 2 11:14:59.454: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 02 Jun 2020 11:14:52 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 02 Jun 2020 11:14:52 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 02 Jun 2020 11:14:52 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 02 Jun 2020 11:14:52 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 78d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 78d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 78d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 78d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 78d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 78d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 78d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jun 2 11:14:59.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-dg998' Jun 2 11:14:59.574: INFO: stderr: "" Jun 2 11:14:59.574: INFO: stdout: "Name: e2e-tests-kubectl-dg998\nLabels: e2e-framework=kubectl\n e2e-run=5a6787e5-a4be-11ea-889d-0242ac110018\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:14:59.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dg998" for this suite. 
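Condensed, the describe spec is just running kubectl describe over each object it created, plus one node and the test namespace, and checking the output for the expected fields. The exact commands from this run can be replayed for as long as the generated namespace still exists:

    NS=e2e-tests-kubectl-dg998                       # generated test namespace from this run
    kubectl --namespace="$NS" describe pod redis-master-jbwrn
    kubectl --namespace="$NS" describe rc redis-master
    kubectl --namespace="$NS" describe service redis-master
    kubectl describe node hunter-control-plane
    kubectl describe namespace "$NS"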
Jun 2 11:15:21.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:15:21.672: INFO: namespace: e2e-tests-kubectl-dg998, resource: bindings, ignored listing per whitelist Jun 2 11:15:21.691: INFO: namespace e2e-tests-kubectl-dg998 deletion completed in 22.11322579s • [SLOW TEST:26.537 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:15:21.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Jun 2 11:15:22.393: INFO: Waiting up to 5m0s for pod "pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-npm5t" in namespace "e2e-tests-svcaccounts-cq98h" to be "success or failure" Jun 2 11:15:22.433: INFO: Pod "pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-npm5t": Phase="Pending", Reason="", readiness=false. Elapsed: 40.156845ms Jun 2 11:15:24.438: INFO: Pod "pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-npm5t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044307423s Jun 2 11:15:26.441: INFO: Pod "pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-npm5t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047843946s Jun 2 11:15:28.446: INFO: Pod "pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-npm5t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052366642s Jun 2 11:15:30.450: INFO: Pod "pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-npm5t": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.056671702s STEP: Saw pod success Jun 2 11:15:30.450: INFO: Pod "pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-npm5t" satisfied condition "success or failure" Jun 2 11:15:30.453: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-npm5t container token-test: STEP: delete the pod Jun 2 11:15:30.494: INFO: Waiting for pod pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-npm5t to disappear Jun 2 11:15:30.505: INFO: Pod pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-npm5t no longer exists STEP: Creating a pod to test consume service account root CA Jun 2 11:15:30.509: INFO: Waiting up to 5m0s for pod "pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-5vrbm" in namespace "e2e-tests-svcaccounts-cq98h" to be "success or failure" Jun 2 11:15:30.578: INFO: Pod "pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-5vrbm": Phase="Pending", Reason="", readiness=false. Elapsed: 68.632732ms Jun 2 11:15:32.582: INFO: Pod "pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-5vrbm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072855475s Jun 2 11:15:34.586: INFO: Pod "pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-5vrbm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077182712s Jun 2 11:15:36.590: INFO: Pod "pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-5vrbm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.080695463s STEP: Saw pod success Jun 2 11:15:36.590: INFO: Pod "pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-5vrbm" satisfied condition "success or failure" Jun 2 11:15:36.592: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-5vrbm container root-ca-test: STEP: delete the pod Jun 2 11:15:36.686: INFO: Waiting for pod pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-5vrbm to disappear Jun 2 11:15:36.715: INFO: Pod pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-5vrbm no longer exists STEP: Creating a pod to test consume service account namespace Jun 2 11:15:36.719: INFO: Waiting up to 5m0s for pod "pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-kj7z6" in namespace "e2e-tests-svcaccounts-cq98h" to be "success or failure" Jun 2 11:15:36.734: INFO: Pod "pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-kj7z6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.519512ms Jun 2 11:15:38.811: INFO: Pod "pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-kj7z6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091613899s Jun 2 11:15:40.815: INFO: Pod "pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-kj7z6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096095152s Jun 2 11:15:42.819: INFO: Pod "pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-kj7z6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.099672858s STEP: Saw pod success Jun 2 11:15:42.819: INFO: Pod "pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-kj7z6" satisfied condition "success or failure" Jun 2 11:15:42.822: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-kj7z6 container namespace-test: STEP: delete the pod Jun 2 11:15:42.840: INFO: Waiting for pod pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-kj7z6 to disappear Jun 2 11:15:42.893: INFO: Pod pod-service-account-59b4f893-a4c2-11ea-889d-0242ac110018-kj7z6 no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:15:42.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-cq98h" for this suite. Jun 2 11:15:48.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:15:49.019: INFO: namespace: e2e-tests-svcaccounts-cq98h, resource: bindings, ignored listing per whitelist Jun 2 11:15:49.022: INFO: namespace e2e-tests-svcaccounts-cq98h deletion completed in 6.124637936s • [SLOW TEST:27.331 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:15:49.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 2 11:15:49.168: INFO: Waiting up to 5m0s for pod "pod-69a98038-a4c2-11ea-889d-0242ac110018" in namespace "e2e-tests-emptydir-qcxwt" to be "success or failure" Jun 2 11:15:49.181: INFO: Pod "pod-69a98038-a4c2-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.254982ms Jun 2 11:15:51.188: INFO: Pod "pod-69a98038-a4c2-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020041537s Jun 2 11:15:53.284: INFO: Pod "pod-69a98038-a4c2-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.115633583s STEP: Saw pod success Jun 2 11:15:53.284: INFO: Pod "pod-69a98038-a4c2-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:15:53.287: INFO: Trying to get logs from node hunter-worker pod pod-69a98038-a4c2-11ea-889d-0242ac110018 container test-container: STEP: delete the pod Jun 2 11:15:53.337: INFO: Waiting for pod pod-69a98038-a4c2-11ea-889d-0242ac110018 to disappear Jun 2 11:15:53.350: INFO: Pod pod-69a98038-a4c2-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:15:53.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qcxwt" for this suite. Jun 2 11:15:59.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:15:59.446: INFO: namespace: e2e-tests-emptydir-qcxwt, resource: bindings, ignored listing per whitelist Jun 2 11:15:59.452: INFO: namespace e2e-tests-emptydir-qcxwt deletion completed in 6.099295345s • [SLOW TEST:10.430 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:15:59.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 2 11:15:59.604: INFO: Creating deployment "nginx-deployment" Jun 2 11:15:59.620: INFO: Waiting for observed generation 1 Jun 2 11:16:01.769: INFO: Waiting for all required pods to come up Jun 2 11:16:01.773: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Jun 2 11:16:11.978: INFO: Waiting for deployment "nginx-deployment" to complete Jun 2 11:16:11.984: INFO: Updating deployment "nginx-deployment" with a non-existent image Jun 2 11:16:11.990: INFO: Updating deployment nginx-deployment Jun 2 11:16:11.990: INFO: Waiting for observed generation 2 Jun 2 11:16:14.002: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jun 2 11:16:14.005: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jun 2 11:16:14.008: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jun 2 11:16:14.016: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jun 2 11:16:14.016: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jun 2 11:16:14.018: INFO: Waiting for the second rollout's replicaset of deployment 
"nginx-deployment" to have desired number of replicas Jun 2 11:16:14.023: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Jun 2 11:16:14.023: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Jun 2 11:16:14.195: INFO: Updating deployment nginx-deployment Jun 2 11:16:14.195: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Jun 2 11:16:14.292: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jun 2 11:16:14.653: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 2 11:16:15.160: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zwnxw/deployments/nginx-deployment,UID:6fe44787-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821732,Generation:3,CreationTimestamp:2020-06-02 11:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-06-02 11:16:12 +0000 UTC 2020-06-02 11:15:59 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-06-02 11:16:14 +0000 UTC 2020-06-02 
11:16:14 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Jun 2 11:16:15.202: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zwnxw/replicasets/nginx-deployment-5c98f8fb5,UID:77462563-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821750,Generation:3,CreationTimestamp:2020-06-02 11:16:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 6fe44787-a4c2-11ea-99e8-0242ac110002 0xc002469507 0xc002469508}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 2 11:16:15.202: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jun 2 11:16:15.202: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zwnxw/replicasets/nginx-deployment-85ddf47c5d,UID:6fed3a4b-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821740,Generation:3,CreationTimestamp:2020-06-02 11:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 6fe44787-a4c2-11ea-99e8-0242ac110002 0xc0024695c7 0xc0024695c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jun 2 11:16:15.301: INFO: Pod "nginx-deployment-5c98f8fb5-75qvw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-75qvw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-5c98f8fb5-75qvw,UID:78f233c7-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821737,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 77462563-a4c2-11ea-99e8-0242ac110002 0xc00214e147 0xc00214e148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214e1c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214e260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.302: INFO: Pod "nginx-deployment-5c98f8fb5-8kdnm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8kdnm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-5c98f8fb5-8kdnm,UID:776f5cb7-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821681,Generation:0,CreationTimestamp:2020-06-02 11:16:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 77462563-a4c2-11ea-99e8-0242ac110002 0xc00214e2d7 0xc00214e2d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214e350} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00214e370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-02 11:16:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.302: INFO: Pod "nginx-deployment-5c98f8fb5-blzz8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-blzz8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-5c98f8fb5-blzz8,UID:774eaf3f-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821661,Generation:0,CreationTimestamp:2020-06-02 11:16:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 77462563-a4c2-11ea-99e8-0242ac110002 0xc00214e437 0xc00214e438}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214e4b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214e4d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-02 11:16:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.302: INFO: Pod "nginx-deployment-5c98f8fb5-dcq4z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dcq4z,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-5c98f8fb5-dcq4z,UID:774763c3-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821649,Generation:0,CreationTimestamp:2020-06-02 11:16:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 77462563-a4c2-11ea-99e8-0242ac110002 0xc00214e597 0xc00214e598}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214e610} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214e630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-02 11:16:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.302: INFO: Pod "nginx-deployment-5c98f8fb5-htt5d" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-htt5d,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-5c98f8fb5-htt5d,UID:78a56252-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821706,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 77462563-a4c2-11ea-99e8-0242ac110002 0xc00214e6f7 0xc00214e6f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214e770} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214e790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.302: INFO: Pod "nginx-deployment-5c98f8fb5-kgrb8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kgrb8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-5c98f8fb5-kgrb8,UID:78f22658-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821734,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 77462563-a4c2-11ea-99e8-0242ac110002 0xc00214e807 0xc00214e808}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] 
{map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214e880} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214e8a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.302: INFO: Pod "nginx-deployment-5c98f8fb5-lh4mt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lh4mt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-5c98f8fb5-lh4mt,UID:78f23be0-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821739,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 77462563-a4c2-11ea-99e8-0242ac110002 0xc00214e917 0xc00214e918}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214e990} {node.kubernetes.io/unreachable Exists 
NoExecute 0xc00214e9b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.303: INFO: Pod "nginx-deployment-5c98f8fb5-mbr2z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mbr2z,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-5c98f8fb5-mbr2z,UID:78dd4052-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821722,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 77462563-a4c2-11ea-99e8-0242ac110002 0xc00214ea27 0xc00214ea28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214eaa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214eac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.303: INFO: Pod "nginx-deployment-5c98f8fb5-qfnw8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qfnw8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-5c98f8fb5-qfnw8,UID:78f28fa4-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821742,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 77462563-a4c2-11ea-99e8-0242ac110002 0xc00214eb47 0xc00214eb48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214ebd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214ebf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.303: INFO: Pod "nginx-deployment-5c98f8fb5-vs4lb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vs4lb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-5c98f8fb5-vs4lb,UID:774e99ca-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821660,Generation:0,CreationTimestamp:2020-06-02 11:16:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 77462563-a4c2-11ea-99e8-0242ac110002 0xc00214ed07 0xc00214ed08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214eda0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214edc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-02 11:16:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.303: INFO: Pod "nginx-deployment-5c98f8fb5-wcb2j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wcb2j,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-5c98f8fb5-wcb2j,UID:78dd3bef-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821727,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 77462563-a4c2-11ea-99e8-0242ac110002 0xc00214eef7 0xc00214eef8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214ef80} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214efb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.303: INFO: Pod "nginx-deployment-5c98f8fb5-wkrbj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wkrbj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-5c98f8fb5-wkrbj,UID:790f18f2-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821747,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 77462563-a4c2-11ea-99e8-0242ac110002 0xc00214f067 0xc00214f068}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214f0e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214f100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:15 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.304: INFO: Pod "nginx-deployment-5c98f8fb5-zmq47" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zmq47,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-5c98f8fb5-zmq47,UID:7778d269-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821684,Generation:0,CreationTimestamp:2020-06-02 11:16:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 77462563-a4c2-11ea-99e8-0242ac110002 0xc00214f177 0xc00214f178}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214f1f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214f210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-02 11:16:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.304: INFO: Pod "nginx-deployment-85ddf47c5d-2dzqr" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2dzqr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-2dzqr,UID:6feffe85-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821605,Generation:0,CreationTimestamp:2020-06-02 11:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc00214f2e7 0xc00214f2e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214f360} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214f380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:15:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:15:59 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.168,StartTime:2020-06-02 11:15:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-02 11:16:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2eb9284bfc3827522b3bd87516ffe471439c11ea10913e8d28a9ac9f21e7dfb5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.304: INFO: Pod "nginx-deployment-85ddf47c5d-2jqzj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2jqzj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-2jqzj,UID:6ff62b64-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821592,Generation:0,CreationTimestamp:2020-06-02 
11:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc00214f447 0xc00214f448}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214f4c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214f4e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:15:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:15:59 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.250,StartTime:2020-06-02 11:15:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-02 11:16:06 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4b14872b3a31668291be9eec6d9a48d3cf6b924367abc9c1b5349aa40f1f94ce}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.304: INFO: Pod "nginx-deployment-85ddf47c5d-86trk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-86trk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-86trk,UID:78dd2b64-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821729,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc00214f5a7 
0xc00214f5a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214f620} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214f640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.305: INFO: Pod "nginx-deployment-85ddf47c5d-9nvpn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9nvpn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-9nvpn,UID:78dd1b06-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821755,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc00214f6b7 0xc00214f6b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214f730} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214f750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-02 11:16:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.305: INFO: Pod "nginx-deployment-85ddf47c5d-bs8n9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bs8n9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-bs8n9,UID:78f2385e-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821735,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc00214f807 0xc00214f808}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214f880} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214f8a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.305: INFO: Pod "nginx-deployment-85ddf47c5d-fd8vg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fd8vg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-fd8vg,UID:6fef8631-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821583,Generation:0,CreationTimestamp:2020-06-02 11:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc00214f917 0xc00214f918}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214f990} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214f9b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:15:59 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:15:59 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.248,StartTime:2020-06-02 11:15:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-02 11:16:06 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b6348dfa4e2dc5c2bf768060734520d36328f0581bf952d9a15c7130c6c9c5e8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.305: INFO: Pod "nginx-deployment-85ddf47c5d-fslm9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fslm9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-fslm9,UID:78a533ea-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821756,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc00214fa77 0xc00214fa78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214faf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214fb10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-02 11:16:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.305: INFO: Pod "nginx-deployment-85ddf47c5d-hgwvd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hgwvd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-hgwvd,UID:78f2648a-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821736,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc00214fbc7 0xc00214fbc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214fc40} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214fc60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.305: INFO: Pod "nginx-deployment-85ddf47c5d-mtgdz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mtgdz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-mtgdz,UID:6ff829e0-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821620,Generation:0,CreationTimestamp:2020-06-02 11:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet 
nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc00214fcd7 0xc00214fcd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214fd50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214fd70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:15:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:15:59 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.252,StartTime:2020-06-02 11:15:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-02 11:16:09 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://fb881792dc76b63db0db6865f945c2a5c21312bc1c84d2f4f4d1ac1c7a1a09b4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.305: INFO: Pod "nginx-deployment-85ddf47c5d-n5bpv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n5bpv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-n5bpv,UID:78dd3b87-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821730,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc00214fe37 0xc00214fe38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214feb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214fed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.306: INFO: Pod "nginx-deployment-85ddf47c5d-nlkmg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nlkmg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-nlkmg,UID:78f27f56-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821741,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc00214ff47 0xc00214ff48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00214ffc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00214ffe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.306: INFO: Pod "nginx-deployment-85ddf47c5d-nxmmk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nxmmk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-nxmmk,UID:78f28333-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821744,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc0009e8067 0xc0009e8068}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009e8150} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009e81b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.306: INFO: Pod "nginx-deployment-85ddf47c5d-qghts" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qghts,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-qghts,UID:78dd1875-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821721,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc0009e8277 0xc0009e8278}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009e83a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009e83c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.306: INFO: Pod "nginx-deployment-85ddf47c5d-r6cmk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-r6cmk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-r6cmk,UID:78f28c9a-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821743,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc0009e84a7 
0xc0009e84a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009e85a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009e85c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.306: INFO: Pod "nginx-deployment-85ddf47c5d-r7k4c" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-r7k4c,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-r7k4c,UID:6ff62c47-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821613,Generation:0,CreationTimestamp:2020-06-02 11:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc0009e8697 0xc0009e8698}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009e8750} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009e8790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:15:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:15:59 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.165,StartTime:2020-06-02 11:15:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-02 11:16:07 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://27dff4016ed8d1f891cd474dbdf31f8af995b5b1f5a0fddfdecb4f7414bc9b31}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.306: INFO: Pod "nginx-deployment-85ddf47c5d-rkrjn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rkrjn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-rkrjn,UID:6ff00581-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821588,Generation:0,CreationTimestamp:2020-06-02 11:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc0009e8a77 0xc0009e8a78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009e8af0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009e8b10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:15:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:15:59 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.249,StartTime:2020-06-02 11:15:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-02 11:16:06 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://52dd2c2f9031e8efff0eb3b601c44c7aa973f9e12e5f722438ed11bb693839a3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.306: INFO: Pod "nginx-deployment-85ddf47c5d-rtbhh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rtbhh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-rtbhh,UID:78976778-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821746,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc0009e8bd7 0xc0009e8bd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009e8c50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009e8c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-02 11:16:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.306: INFO: Pod "nginx-deployment-85ddf47c5d-tmc6r" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tmc6r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-tmc6r,UID:6ff62584-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821609,Generation:0,CreationTimestamp:2020-06-02 11:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc0009e8e87 0xc0009e8e88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009e8f00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009e8f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:15:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:15:59 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.166,StartTime:2020-06-02 11:15:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-02 11:16:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a42472be73ff310955543ae856a1f4a6c9edc11ae2d0e4a102d4594e79e8d71d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.307: INFO: Pod "nginx-deployment-85ddf47c5d-trg26" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-trg26,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-trg26,UID:78a53bed-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821705,Generation:0,CreationTimestamp:2020-06-02 11:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc0009e8fe7 0xc0009e8fe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009e9060} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009e9080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:14 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 2 11:16:15.307: INFO: Pod "nginx-deployment-85ddf47c5d-xczzs" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xczzs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-zwnxw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zwnxw/pods/nginx-deployment-85ddf47c5d-xczzs,UID:6ff63e6e-a4c2-11ea-99e8-0242ac110002,ResourceVersion:13821606,Generation:0,CreationTimestamp:2020-06-02 11:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6fed3a4b-a4c2-11ea-99e8-0242ac110002 0xc0009e9327 0xc0009e9328}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dt6mk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dt6mk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-dt6mk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009e93c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009e9450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:15:59 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:16:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:15:59 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.251,StartTime:2020-06-02 11:15:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-02 11:16:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4a7101b8c86bf43093db5633f92839e0c50039a814d6de8355c3109762a349fe}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:16:15.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-zwnxw" for this suite. Jun 2 11:16:39.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:16:39.559: INFO: namespace: e2e-tests-deployment-zwnxw, resource: bindings, ignored listing per whitelist Jun 2 11:16:39.615: INFO: namespace e2e-tests-deployment-zwnxw deletion completed in 24.238395421s • [SLOW TEST:40.162 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:16:39.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-p4vl STEP: Creating a pod to test atomic-volume-subpath Jun 2 11:16:39.790: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-p4vl" in namespace "e2e-tests-subpath-8qq4t" to be "success or failure" Jun 2 11:16:39.822: INFO: Pod "pod-subpath-test-downwardapi-p4vl": Phase="Pending", Reason="", readiness=false. Elapsed: 31.81688ms Jun 2 11:16:41.826: INFO: Pod "pod-subpath-test-downwardapi-p4vl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035069527s Jun 2 11:16:43.830: INFO: Pod "pod-subpath-test-downwardapi-p4vl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039532238s Jun 2 11:16:45.834: INFO: Pod "pod-subpath-test-downwardapi-p4vl": Phase="Running", Reason="", readiness=true. Elapsed: 6.043558008s Jun 2 11:16:47.838: INFO: Pod "pod-subpath-test-downwardapi-p4vl": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.047972746s Jun 2 11:16:49.843: INFO: Pod "pod-subpath-test-downwardapi-p4vl": Phase="Running", Reason="", readiness=false. Elapsed: 10.052657111s Jun 2 11:16:51.848: INFO: Pod "pod-subpath-test-downwardapi-p4vl": Phase="Running", Reason="", readiness=false. Elapsed: 12.057217237s Jun 2 11:16:53.852: INFO: Pod "pod-subpath-test-downwardapi-p4vl": Phase="Running", Reason="", readiness=false. Elapsed: 14.061257766s Jun 2 11:16:55.856: INFO: Pod "pod-subpath-test-downwardapi-p4vl": Phase="Running", Reason="", readiness=false. Elapsed: 16.065801493s Jun 2 11:16:57.860: INFO: Pod "pod-subpath-test-downwardapi-p4vl": Phase="Running", Reason="", readiness=false. Elapsed: 18.069660795s Jun 2 11:16:59.864: INFO: Pod "pod-subpath-test-downwardapi-p4vl": Phase="Running", Reason="", readiness=false. Elapsed: 20.073010628s Jun 2 11:17:01.868: INFO: Pod "pod-subpath-test-downwardapi-p4vl": Phase="Running", Reason="", readiness=false. Elapsed: 22.077101718s Jun 2 11:17:03.872: INFO: Pod "pod-subpath-test-downwardapi-p4vl": Phase="Running", Reason="", readiness=false. Elapsed: 24.081618815s Jun 2 11:17:05.877: INFO: Pod "pod-subpath-test-downwardapi-p4vl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.0866629s STEP: Saw pod success Jun 2 11:17:05.877: INFO: Pod "pod-subpath-test-downwardapi-p4vl" satisfied condition "success or failure" Jun 2 11:17:05.880: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-p4vl container test-container-subpath-downwardapi-p4vl: STEP: delete the pod Jun 2 11:17:05.974: INFO: Waiting for pod pod-subpath-test-downwardapi-p4vl to disappear Jun 2 11:17:05.992: INFO: Pod pod-subpath-test-downwardapi-p4vl no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-p4vl Jun 2 11:17:05.992: INFO: Deleting pod "pod-subpath-test-downwardapi-p4vl" in namespace "e2e-tests-subpath-8qq4t" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:17:05.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-8qq4t" for this suite. 
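
The subpath spec above only logs the lifecycle of pod-subpath-test-downwardapi-p4vl; the manifest itself is built by framework helpers that do not appear in this log. As a hedged, illustrative sketch (not the actual e2e fixture; names, image, and paths are assumptions) of the mechanism it exercises — a downwardAPI volume mounted into the container through a subPath — using the k8s.io/api types:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod: the downwardAPI volume writes the pod name to
	// downward/podname, and the container mounts only that subdirectory
	// via SubPath, then reads the file back.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-downwardapi-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "downward/podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "cat /test-volume/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
					// Mount only the "downward" subdirectory of the volume.
					SubPath: "downward",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

The pod reaching Succeeded, as in the log above, corresponds to the container reading the file through the subPath mount and exiting cleanly.
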
Jun 2 11:17:12.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:17:12.043: INFO: namespace: e2e-tests-subpath-8qq4t, resource: bindings, ignored listing per whitelist Jun 2 11:17:12.227: INFO: namespace e2e-tests-subpath-8qq4t deletion completed in 6.230169574s • [SLOW TEST:32.612 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:17:12.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 2 11:17:12.511: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Jun 2 11:17:12.516: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-cq4hd/daemonsets","resourceVersion":"13822111"},"items":null} Jun 2 11:17:12.519: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-cq4hd/pods","resourceVersion":"13822111"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:17:12.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-cq4hd" for this suite. 
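
The rollback spec above was skipped because the framework saw fewer than two schedulable nodes, so no DaemonSet was actually created. For context only, the object class it targets is an apps/v1 DaemonSet with a RollingUpdate update strategy; a minimal, hypothetical sketch (names and image are illustrative, not the e2e fixture):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	labels := map[string]string{"app": "daemon-example"}
	maxUnavailable := intstr.FromInt(1)
	ds := appsv1.DaemonSet{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "DaemonSet"},
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-example"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate lets a later rollback replace daemon pods one
			// node at a time instead of restarting all of them at once.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type:          appsv1.RollingUpdateDaemonSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDaemonSet{MaxUnavailable: &maxUnavailable},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
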
Jun 2 11:17:18.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:17:18.547: INFO: namespace: e2e-tests-daemonsets-cq4hd, resource: bindings, ignored listing per whitelist Jun 2 11:17:18.639: INFO: namespace e2e-tests-daemonsets-cq4hd deletion completed in 6.11135325s S [SKIPPING] [6.411 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 2 11:17:12.511: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:17:18.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:18:18.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-cmr5p" for this suite. 
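
The readiness-probe spec above logs only the namespace lifecycle, not the pod it built. As a hedged sketch of what a "readiness probe that fails" pod generally looks like (container name, image, and command are assumptions): an exec probe that always exits non-zero, so the container keeps running but the pod never becomes Ready, and restartCount stays at 0 because only liveness probes trigger restarts.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// An always-failing readiness probe. Exec is a field promoted from the
	// probe's embedded handler struct (named Handler in the v1.13-era API),
	// so it is set after construction to stay version-agnostic.
	readiness := &corev1.Probe{
		InitialDelaySeconds: 5,
		PeriodSeconds:       5,
		FailureThreshold:    3,
	}
	readiness.Exec = &corev1.ExecAction{Command: []string{"/bin/false"}}

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "test-readiness-never-ready-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:           "test-webserver",
				Image:          "busybox",
				Command:        []string{"/bin/sh", "-c", "sleep 600"},
				ReadinessProbe: readiness,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
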
Jun 2 11:18:40.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:18:40.809: INFO: namespace: e2e-tests-container-probe-cmr5p, resource: bindings, ignored listing per whitelist Jun 2 11:18:40.875: INFO: namespace e2e-tests-container-probe-cmr5p deletion completed in 22.096714786s • [SLOW TEST:82.236 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:18:40.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-wspjp in namespace e2e-tests-proxy-tg6xh I0602 11:18:41.129540 6 runners.go:184] Created replication controller with name: proxy-service-wspjp, namespace: e2e-tests-proxy-tg6xh, replica count: 1 I0602 11:18:42.179944 6 runners.go:184] proxy-service-wspjp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0602 11:18:43.180167 6 runners.go:184] proxy-service-wspjp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0602 11:18:44.180428 6 runners.go:184] proxy-service-wspjp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0602 11:18:45.180660 6 runners.go:184] proxy-service-wspjp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0602 11:18:46.180896 6 runners.go:184] proxy-service-wspjp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0602 11:18:47.181294 6 runners.go:184] proxy-service-wspjp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0602 11:18:48.181531 6 runners.go:184] proxy-service-wspjp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0602 11:18:49.181775 6 runners.go:184] proxy-service-wspjp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0602 11:18:50.182004 6 runners.go:184] proxy-service-wspjp Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 2 11:18:50.185: INFO: setup took 9.145819038s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jun 2 11:18:50.193: INFO: (0) 
/api/v1/namespaces/e2e-tests-proxy-tg6xh/pods/http:proxy-service-wspjp-jvwnn:160/proxy/: foo (200; 7.796291ms) Jun 2 11:18:50.194: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-tg6xh/pods/http:proxy-service-wspjp-jvwnn:162/proxy/: bar (200; 8.088533ms) Jun 2 11:18:50.194: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-tg6xh/pods/proxy-service-wspjp-jvwnn/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-68z4c STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-68z4c to expose endpoints map[] Jun 2 11:19:07.554: INFO: Get endpoints failed (18.259406ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jun 2 11:19:08.558: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-68z4c exposes endpoints map[] (1.022569704s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-68z4c STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-68z4c to expose endpoints map[pod1:[80]] Jun 2 11:19:11.762: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-68z4c exposes endpoints map[pod1:[80]] (3.197160612s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-68z4c STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-68z4c to expose endpoints map[pod1:[80] pod2:[80]] Jun 2 11:19:14.855: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-68z4c exposes endpoints map[pod1:[80] pod2:[80]] (3.088979912s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-68z4c STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-68z4c to expose endpoints map[pod2:[80]] Jun 2 11:19:15.934: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-68z4c exposes endpoints map[pod2:[80]] (1.075507601s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-68z4c STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-68z4c to expose endpoints map[] Jun 2 11:19:16.961: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-68z4c exposes endpoints map[] (1.021865596s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:19:17.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-68z4c" for this suite. 
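
The endpoint-test2 service and the pod1/pod2 pods above are created by framework helpers not shown in the log. A rough sketch of the shape of those objects — a Service selecting on a label and a backing pod carrying that label on port 80, which is what makes the endpoints map read pod1:[80] — with illustrative names and a pause image assumed, not taken from the fixture:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	labels := map[string]string{"name": "endpoint-test2"}

	// The Service selects pods by label; the endpoints controller then
	// records each Ready matching pod as a <podIP>:80 endpoint.
	svc := corev1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
		Spec: corev1.ServiceSpec{
			Selector: labels,
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}

	// A backing pod: once Ready it appears in the endpoints map (pod1:[80]);
	// deleting it removes the entry again, as the validations above show.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod1", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Ports: []corev1.ContainerPort{{ContainerPort: 80}},
			}},
		},
	}

	for _, obj := range []interface{}{svc, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
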
Jun 2 11:19:23.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:19:23.217: INFO: namespace: e2e-tests-services-68z4c, resource: bindings, ignored listing per whitelist Jun 2 11:19:23.299: INFO: namespace e2e-tests-services-68z4c deletion completed in 6.131865579s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:15.863 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:19:23.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 2 11:19:23.445: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e95c1122-a4c2-11ea-889d-0242ac110018" in namespace "e2e-tests-downward-api-mgls2" to be "success or failure" Jun 2 11:19:23.459: INFO: Pod "downwardapi-volume-e95c1122-a4c2-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.372058ms Jun 2 11:19:25.464: INFO: Pod "downwardapi-volume-e95c1122-a4c2-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018890682s Jun 2 11:19:27.468: INFO: Pod "downwardapi-volume-e95c1122-a4c2-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022777052s STEP: Saw pod success Jun 2 11:19:27.468: INFO: Pod "downwardapi-volume-e95c1122-a4c2-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:19:27.471: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-e95c1122-a4c2-11ea-889d-0242ac110018 container client-container: STEP: delete the pod Jun 2 11:19:27.551: INFO: Waiting for pod downwardapi-volume-e95c1122-a4c2-11ea-889d-0242ac110018 to disappear Jun 2 11:19:27.560: INFO: Pod downwardapi-volume-e95c1122-a4c2-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:19:27.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-mgls2" for this suite. 
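The downward API volume in the case above exposes the container's own CPU request as a file inside the container. A minimal sketch of such a pod spec under the same v1.13-era API; the request value, mount path and command are illustrative, not the test's literal values:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{
				{
					Name:  "client-container",
					Image: "docker.io/library/busybox:1.29",
					// Print the projected file once, then exit so the pod can succeed.
					Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
					Resources: corev1.ResourceRequirements{
						Requests: corev1.ResourceList{
							corev1.ResourceCPU: resource.MustParse("250m"),
						},
					},
					VolumeMounts: []corev1.VolumeMount{
						{Name: "podinfo", MountPath: "/etc/podinfo"},
					},
				},
			},
			Volumes: []corev1.Volume{
				{
					Name: "podinfo",
					VolumeSource: corev1.VolumeSource{
						DownwardAPI: &corev1.DownwardAPIVolumeSource{
							Items: []corev1.DownwardAPIVolumeFile{
								{
									Path: "cpu_request",
									// resourceFieldRef surfaces the container's own request;
									// the kubelet renders it using the item's divisor (default 1).
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.cpu",
									},
								},
							},
						},
					},
				},
			},
		},
	}
	fmt.Println(pod.Name)
}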
Jun 2 11:19:33.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:19:33.670: INFO: namespace: e2e-tests-downward-api-mgls2, resource: bindings, ignored listing per whitelist Jun 2 11:19:33.689: INFO: namespace e2e-tests-downward-api-mgls2 deletion completed in 6.126362337s • [SLOW TEST:10.390 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:19:33.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:19:37.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-727hj" for this suite. 
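For the hostAliases case, the kubelet appends the pod-level hostAliases entries to the container's /etc/hosts, which is what the test then reads back. A minimal sketch under the same v1.13-era API; the IP and hostnames are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"},
		Spec: corev1.PodSpec{
			// Each alias becomes an extra line in the container's /etc/hosts.
			HostAliases: []corev1.HostAlias{
				{IP: "123.45.67.89", Hostnames: []string{"foo.local", "bar.local"}},
			},
			Containers: []corev1.Container{
				{
					Name:    "busybox-host-aliases",
					Image:   "docker.io/library/busybox:1.29",
					Command: []string{"sh", "-c", "cat /etc/hosts && sleep 3600"},
				},
			},
		},
	}
	fmt.Println(pod.Spec.HostAliases[0].IP)
}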
Jun 2 11:20:17.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:20:17.916: INFO: namespace: e2e-tests-kubelet-test-727hj, resource: bindings, ignored listing per whitelist Jun 2 11:20:17.988: INFO: namespace e2e-tests-kubelet-test-727hj deletion completed in 40.110539018s • [SLOW TEST:44.299 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:20:17.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-0a07381c-a4c3-11ea-889d-0242ac110018 STEP: Creating configMap with name cm-test-opt-upd-0a073881-a4c3-11ea-889d-0242ac110018 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-0a07381c-a4c3-11ea-889d-0242ac110018 STEP: Updating configmap cm-test-opt-upd-0a073881-a4c3-11ea-889d-0242ac110018 STEP: Creating configMap with name cm-test-opt-create-0a0738b2-a4c3-11ea-889d-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:21:40.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-rklvl" for this suite. 
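The optional-updates case above relies on two projected-volume behaviours: a projection marked optional tolerates its configMap being deleted or not yet created, and edits to an existing configMap are eventually re-projected into the mounted files on the kubelet's own sync cadence, which is why the test simply waits to observe the update. A minimal sketch of such a volume, v1.13-era API, illustrative names and paths:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					Name:    "projected-configmap-volume-test",
					Image:   "docker.io/library/busybox:1.29",
					Command: []string{"sh", "-c", "while true; do cat /etc/cm/data-1 || true; sleep 5; done"},
					VolumeMounts: []corev1.VolumeMount{
						{Name: "projected-cm", MountPath: "/etc/cm", ReadOnly: true},
					},
				},
			},
			Volumes: []corev1.Volume{
				{
					Name: "projected-cm",
					VolumeSource: corev1.VolumeSource{
						Projected: &corev1.ProjectedVolumeSource{
							Sources: []corev1.VolumeProjection{
								{
									ConfigMap: &corev1.ConfigMapProjection{
										LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
										// Optional: the pod still starts and keeps running
										// even if this configMap is missing or deleted.
										Optional: &optional,
									},
								},
							},
						},
					},
				},
			},
		},
	}
	fmt.Println(pod.Name)
}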
Jun 2 11:22:02.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:22:02.728: INFO: namespace: e2e-tests-projected-rklvl, resource: bindings, ignored listing per whitelist Jun 2 11:22:02.777: INFO: namespace e2e-tests-projected-rklvl deletion completed in 22.102515929s • [SLOW TEST:104.788 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:22:02.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-486a2790-a4c3-11ea-889d-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 2 11:22:02.930: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-48723358-a4c3-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-pbk52" to be "success or failure" Jun 2 11:22:02.942: INFO: Pod "pod-projected-configmaps-48723358-a4c3-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 11.86123ms Jun 2 11:22:05.074: INFO: Pod "pod-projected-configmaps-48723358-a4c3-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144129146s Jun 2 11:22:07.140: INFO: Pod "pod-projected-configmaps-48723358-a4c3-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.209895212s STEP: Saw pod success Jun 2 11:22:07.140: INFO: Pod "pod-projected-configmaps-48723358-a4c3-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:22:07.144: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-48723358-a4c3-11ea-889d-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jun 2 11:22:07.183: INFO: Waiting for pod pod-projected-configmaps-48723358-a4c3-11ea-889d-0242ac110018 to disappear Jun 2 11:22:07.200: INFO: Pod pod-projected-configmaps-48723358-a4c3-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:22:07.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pbk52" for this suite. 
Jun 2 11:22:13.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:22:13.275: INFO: namespace: e2e-tests-projected-pbk52, resource: bindings, ignored listing per whitelist Jun 2 11:22:13.300: INFO: namespace e2e-tests-projected-pbk52 deletion completed in 6.096047562s • [SLOW TEST:10.523 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:22:13.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-4eb75035-a4c3-11ea-889d-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 2 11:22:13.526: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4eb97efd-a4c3-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-l8k65" to be "success or failure" Jun 2 11:22:13.548: INFO: Pod "pod-projected-configmaps-4eb97efd-a4c3-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 21.88923ms Jun 2 11:22:15.601: INFO: Pod "pod-projected-configmaps-4eb97efd-a4c3-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07545357s Jun 2 11:22:17.605: INFO: Pod "pod-projected-configmaps-4eb97efd-a4c3-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078841591s STEP: Saw pod success Jun 2 11:22:17.605: INFO: Pod "pod-projected-configmaps-4eb97efd-a4c3-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:22:17.607: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-4eb97efd-a4c3-11ea-889d-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jun 2 11:22:17.687: INFO: Waiting for pod pod-projected-configmaps-4eb97efd-a4c3-11ea-889d-0242ac110018 to disappear Jun 2 11:22:17.720: INFO: Pod pod-projected-configmaps-4eb97efd-a4c3-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:22:17.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-l8k65" for this suite. 
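The mappings-and-item-mode case exercises the per-key form of the same projection: individual keys are mapped to chosen paths and given an explicit file mode instead of the volume default. A minimal sketch, v1.13-era API; the key, path and mode are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	itemMode := int32(0400)

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-mapped"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{
				{
					Name:    "projected-configmap-volume-test",
					Image:   "docker.io/library/busybox:1.29",
					Command: []string{"sh", "-c", "ls -l /etc/cm/path/to && cat /etc/cm/path/to/data-2"},
					VolumeMounts: []corev1.VolumeMount{
						{Name: "projected-cm", MountPath: "/etc/cm", ReadOnly: true},
					},
				},
			},
			Volumes: []corev1.Volume{
				{
					Name: "projected-cm",
					VolumeSource: corev1.VolumeSource{
						Projected: &corev1.ProjectedVolumeSource{
							Sources: []corev1.VolumeProjection{
								{
									ConfigMap: &corev1.ConfigMapProjection{
										LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
										Items: []corev1.KeyToPath{
											// Only this key is projected, at the mapped path,
											// with its own mode instead of the volume default.
											{Key: "data-2", Path: "path/to/data-2", Mode: &itemMode},
										},
									},
								},
							},
						},
					},
				},
			},
		},
	}
	fmt.Println(pod.Name)
}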
Jun 2 11:22:23.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:22:23.800: INFO: namespace: e2e-tests-projected-l8k65, resource: bindings, ignored listing per whitelist Jun 2 11:22:23.840: INFO: namespace e2e-tests-projected-l8k65 deletion completed in 6.115917797s • [SLOW TEST:10.540 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:22:23.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jun 2 11:22:23.919: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 2 11:22:23.953: INFO: Waiting for terminating namespaces to be deleted... Jun 2 11:22:23.956: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Jun 2 11:22:23.962: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Jun 2 11:22:23.962: INFO: Container kube-proxy ready: true, restart count 0 Jun 2 11:22:23.962: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 2 11:22:23.962: INFO: Container kindnet-cni ready: true, restart count 0 Jun 2 11:22:23.962: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 2 11:22:23.962: INFO: Container coredns ready: true, restart count 0 Jun 2 11:22:23.962: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Jun 2 11:22:23.987: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 2 11:22:23.987: INFO: Container kindnet-cni ready: true, restart count 0 Jun 2 11:22:23.987: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 2 11:22:23.987: INFO: Container coredns ready: true, restart count 0 Jun 2 11:22:23.987: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 2 11:22:23.987: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-576b895f-a4c3-11ea-889d-0242ac110018 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-576b895f-a4c3-11ea-889d-0242ac110018 off the node hunter-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-576b895f-a4c3-11ea-889d-0242ac110018 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:22:32.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-5984x" for this suite. Jun 2 11:23:00.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:23:00.327: INFO: namespace: e2e-tests-sched-pred-5984x, resource: bindings, ignored listing per whitelist Jun 2 11:23:00.363: INFO: namespace e2e-tests-sched-pred-5984x deletion completed in 28.169827341s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:36.523 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:23:00.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 2 11:23:00.563: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6acade69-a4c3-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-5nz5v" to be "success or failure" Jun 2 11:23:00.566: INFO: Pod "downwardapi-volume-6acade69-a4c3-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.329858ms Jun 2 11:23:02.570: INFO: Pod "downwardapi-volume-6acade69-a4c3-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007237304s Jun 2 11:23:04.574: INFO: Pod "downwardapi-volume-6acade69-a4c3-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011633158s STEP: Saw pod success Jun 2 11:23:04.574: INFO: Pod "downwardapi-volume-6acade69-a4c3-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:23:04.578: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-6acade69-a4c3-11ea-889d-0242ac110018 container client-container: STEP: delete the pod Jun 2 11:23:04.614: INFO: Waiting for pod downwardapi-volume-6acade69-a4c3-11ea-889d-0242ac110018 to disappear Jun 2 11:23:04.620: INFO: Pod downwardapi-volume-6acade69-a4c3-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:23:04.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5nz5v" for this suite. Jun 2 11:23:10.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:23:10.693: INFO: namespace: e2e-tests-projected-5nz5v, resource: bindings, ignored listing per whitelist Jun 2 11:23:10.722: INFO: namespace e2e-tests-projected-5nz5v deletion completed in 6.097960253s • [SLOW TEST:10.358 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:23:10.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-7dcm9 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-7dcm9 STEP: Deleting pre-stop pod Jun 2 11:23:23.899: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:23:23.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-7dcm9" for this suite. 
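The prestop case above works by giving the tester pod a preStop hook that reports back to the server pod while the tester is being deleted; the "prestop": 1 entry in the server state dumped in the log is that report arriving. A minimal sketch of a container with such a hook, assuming the v1.13-era API in which lifecycle hooks use the Handler type (later releases renamed it LifecycleHandler); the command and URL are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					Name:    "tester",
					Image:   "docker.io/library/busybox:1.29",
					Command: []string{"sleep", "600"},
					Lifecycle: &corev1.Lifecycle{
						// Runs inside the container after deletion starts and
						// before the container is actually stopped.
						PreStop: &corev1.Handler{
							Exec: &corev1.ExecAction{
								Command: []string{"wget", "-qO-", "http://server:8080/write?prestop=1"},
							},
						},
					},
				},
			},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Lifecycle.PreStop != nil)
}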
Jun 2 11:24:01.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:24:02.033: INFO: namespace: e2e-tests-prestop-7dcm9, resource: bindings, ignored listing per whitelist Jun 2 11:24:02.037: INFO: namespace e2e-tests-prestop-7dcm9 deletion completed in 38.109719738s • [SLOW TEST:51.315 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:24:02.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jun 2 11:24:02.211: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-rmb4j,SelfLink:/api/v1/namespaces/e2e-tests-watch-rmb4j/configmaps/e2e-watch-test-label-changed,UID:8f8266db-a4c3-11ea-99e8-0242ac110002,ResourceVersion:13823283,Generation:0,CreationTimestamp:2020-06-02 11:24:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 2 11:24:02.211: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-rmb4j,SelfLink:/api/v1/namespaces/e2e-tests-watch-rmb4j/configmaps/e2e-watch-test-label-changed,UID:8f8266db-a4c3-11ea-99e8-0242ac110002,ResourceVersion:13823284,Generation:0,CreationTimestamp:2020-06-02 11:24:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 2 11:24:02.211: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-rmb4j,SelfLink:/api/v1/namespaces/e2e-tests-watch-rmb4j/configmaps/e2e-watch-test-label-changed,UID:8f8266db-a4c3-11ea-99e8-0242ac110002,ResourceVersion:13823285,Generation:0,CreationTimestamp:2020-06-02 11:24:02 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jun 2 11:24:12.282: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-rmb4j,SelfLink:/api/v1/namespaces/e2e-tests-watch-rmb4j/configmaps/e2e-watch-test-label-changed,UID:8f8266db-a4c3-11ea-99e8-0242ac110002,ResourceVersion:13823306,Generation:0,CreationTimestamp:2020-06-02 11:24:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 2 11:24:12.282: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-rmb4j,SelfLink:/api/v1/namespaces/e2e-tests-watch-rmb4j/configmaps/e2e-watch-test-label-changed,UID:8f8266db-a4c3-11ea-99e8-0242ac110002,ResourceVersion:13823307,Generation:0,CreationTimestamp:2020-06-02 11:24:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jun 2 11:24:12.282: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-rmb4j,SelfLink:/api/v1/namespaces/e2e-tests-watch-rmb4j/configmaps/e2e-watch-test-label-changed,UID:8f8266db-a4c3-11ea-99e8-0242ac110002,ResourceVersion:13823308,Generation:0,CreationTimestamp:2020-06-02 11:24:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:24:12.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-rmb4j" for this suite. 
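The watch case filters on the watch-this-configmap label, which is why the mutation made while the label was changed produces no notification and the restore shows up as a fresh ADDED event. A minimal client-side sketch, assuming the context-free client-go signatures matching this v1.13 suite (newer client-go adds a context.Context argument) and the kubeconfig path the suite itself logs:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Only configmaps carrying this exact label/value pair are delivered,
	// so flipping the label away and back yields DELETED then ADDED.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Mirrors the "Got : ADDED / MODIFIED / DELETED" lines in the log above.
	for ev := range w.ResultChan() {
		fmt.Println("Got :", ev.Type)
	}
}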
Jun 2 11:24:18.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:24:18.384: INFO: namespace: e2e-tests-watch-rmb4j, resource: bindings, ignored listing per whitelist Jun 2 11:24:18.385: INFO: namespace e2e-tests-watch-rmb4j deletion completed in 6.098746605s • [SLOW TEST:16.348 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:24:18.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 2 11:24:42.590: INFO: Container started at 2020-06-02 11:24:21 +0000 UTC, pod became ready at 2020-06-02 11:24:42 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:24:42.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-86xtr" for this suite. 
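The initial-delay case checks exactly the gap visible above: the container started at 11:24:21 but the pod only became ready at 11:24:42, because readiness is not probed before initialDelaySeconds has elapsed, and the pod must never restart in the meantime. A readiness probe of that shape, v1.13-era API (Probe embeds the Handler type; newer releases call it ProbeHandler), with illustrative values:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					Name:    "test-webserver",
					Image:   "docker.io/library/busybox:1.29",
					Command: []string{"sh", "-c", "touch /tmp/health && sleep 3600"},
					ReadinessProbe: &corev1.Probe{
						Handler: corev1.Handler{
							Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
						},
						// No readiness probing happens before this delay, so the pod
						// stays NotReady for at least 20s even though the probe
						// command would already succeed.
						InitialDelaySeconds: 20,
						PeriodSeconds:       5,
					},
				},
			},
		},
	}
	fmt.Println(pod.Spec.Containers[0].ReadinessProbe.InitialDelaySeconds)
}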
Jun 2 11:25:04.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:25:04.686: INFO: namespace: e2e-tests-container-probe-86xtr, resource: bindings, ignored listing per whitelist Jun 2 11:25:04.726: INFO: namespace e2e-tests-container-probe-86xtr deletion completed in 22.133021589s • [SLOW TEST:46.341 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:25:04.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0602 11:25:45.670769 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 2 11:25:45.670: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:25:45.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-dnxwz" for this suite. 
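The orphan case above hinges entirely on the delete options: with PropagationPolicy set to Orphan, removing the replication controller must leave its pods running, which is what the 30-second wait verifies. A minimal sketch of those options, v1.13-era API; the commented client call uses the old context-free Delete signature, and the namespace and controller name are illustrative:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Orphan: delete only the owner object; dependents lose their ownerReference
	// instead of being garbage collected.
	orphan := metav1.DeletePropagationOrphan
	opts := &metav1.DeleteOptions{PropagationPolicy: &orphan}

	// With a *kubernetes.Clientset cs (v1.13-era signature, no context argument):
	//   err := cs.CoreV1().ReplicationControllers("e2e-tests-gc-example").Delete("simpletest.rc", opts)

	fmt.Println(*opts.PropagationPolicy)
}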
Jun 2 11:25:53.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:25:53.745: INFO: namespace: e2e-tests-gc-dnxwz, resource: bindings, ignored listing per whitelist Jun 2 11:25:53.775: INFO: namespace e2e-tests-gc-dnxwz deletion completed in 8.101385839s • [SLOW TEST:49.048 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:25:53.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-d229504e-a4c3-11ea-889d-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 2 11:25:54.175: INFO: Waiting up to 5m0s for pod "pod-configmaps-d22e2d78-a4c3-11ea-889d-0242ac110018" in namespace "e2e-tests-configmap-6lht5" to be "success or failure" Jun 2 11:25:54.323: INFO: Pod "pod-configmaps-d22e2d78-a4c3-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 148.191261ms Jun 2 11:25:56.327: INFO: Pod "pod-configmaps-d22e2d78-a4c3-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152072951s Jun 2 11:25:58.330: INFO: Pod "pod-configmaps-d22e2d78-a4c3-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.155875946s STEP: Saw pod success Jun 2 11:25:58.331: INFO: Pod "pod-configmaps-d22e2d78-a4c3-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:25:58.333: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-d22e2d78-a4c3-11ea-889d-0242ac110018 container configmap-volume-test: STEP: delete the pod Jun 2 11:25:58.490: INFO: Waiting for pod pod-configmaps-d22e2d78-a4c3-11ea-889d-0242ac110018 to disappear Jun 2 11:25:58.512: INFO: Pod pod-configmaps-d22e2d78-a4c3-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:25:58.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-6lht5" for this suite. 
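The defaultMode case sets the file mode applied to every key projected from the configMap volume. A minimal sketch of the volume, v1.13-era API; the 0400 mode, names and paths are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	defaultMode := int32(0400)

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{
				{
					Name:    "configmap-volume-test",
					Image:   "docker.io/library/busybox:1.29",
					Command: []string{"sh", "-c", "ls -l /etc/configmap-volume"},
					VolumeMounts: []corev1.VolumeMount{
						{Name: "configmap-volume", MountPath: "/etc/configmap-volume", ReadOnly: true},
					},
				},
			},
			Volumes: []corev1.Volume{
				{
					Name: "configmap-volume",
					VolumeSource: corev1.VolumeSource{
						ConfigMap: &corev1.ConfigMapVolumeSource{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
							// Applied to every projected file unless an individual
							// item overrides it with its own mode.
							DefaultMode: &defaultMode,
						},
					},
				},
			},
		},
	}
	fmt.Println(pod.Name)
}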
Jun 2 11:26:04.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:26:04.604: INFO: namespace: e2e-tests-configmap-6lht5, resource: bindings, ignored listing per whitelist Jun 2 11:26:04.650: INFO: namespace e2e-tests-configmap-6lht5 deletion completed in 6.132968162s • [SLOW TEST:10.874 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:26:04.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 2 11:26:04.764: INFO: Waiting up to 5m0s for pod "pod-d891e903-a4c3-11ea-889d-0242ac110018" in namespace "e2e-tests-emptydir-9qz9h" to be "success or failure" Jun 2 11:26:04.795: INFO: Pod "pod-d891e903-a4c3-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 31.333075ms Jun 2 11:26:06.893: INFO: Pod "pod-d891e903-a4c3-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129250912s Jun 2 11:26:08.896: INFO: Pod "pod-d891e903-a4c3-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.132885987s STEP: Saw pod success Jun 2 11:26:08.896: INFO: Pod "pod-d891e903-a4c3-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:26:08.899: INFO: Trying to get logs from node hunter-worker pod pod-d891e903-a4c3-11ea-889d-0242ac110018 container test-container: STEP: delete the pod Jun 2 11:26:08.936: INFO: Waiting for pod pod-d891e903-a4c3-11ea-889d-0242ac110018 to disappear Jun 2 11:26:08.996: INFO: Pod pod-d891e903-a4c3-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:26:08.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9qz9h" for this suite. 
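The (non-root,0666,tmpfs) case combines three things: an emptyDir backed by memory (tmpfs), a file created with mode 0666 on it, and a container running as a non-root UID that must still be able to read and write that file. A minimal sketch, v1.13-era API; the UID, command and paths are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1001)

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{
				{
					Name:  "test-container",
					Image: "docker.io/library/busybox:1.29",
					// Create a 0666 file on the tmpfs mount and list it back.
					Command:         []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
					SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRootUID},
					VolumeMounts: []corev1.VolumeMount{
						{Name: "test-volume", MountPath: "/test-volume"},
					},
				},
			},
			Volumes: []corev1.Volume{
				{
					Name: "test-volume",
					VolumeSource: corev1.VolumeSource{
						// Medium "Memory" makes the emptyDir a tmpfs mount.
						EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
					},
				},
			},
		},
	}
	fmt.Println(pod.Name)
}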
Jun 2 11:26:15.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:26:15.211: INFO: namespace: e2e-tests-emptydir-9qz9h, resource: bindings, ignored listing per whitelist Jun 2 11:26:15.233: INFO: namespace e2e-tests-emptydir-9qz9h deletion completed in 6.234597195s • [SLOW TEST:10.584 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:26:15.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0602 11:26:26.932173 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 2 11:26:26.932: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:26:26.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-psctg" for this suite. 
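The two-owner case above works by giving half of the pods a second ownerReference pointing at the RC that stays: as long as any listed owner still exists, the garbage collector must not delete the dependent. A minimal sketch of such metadata, v1.13-era API; names and UIDs are placeholders, not values from this run:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	isController := true

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "simpletest-pod",
			OwnerReferences: []metav1.OwnerReference{
				// Managing owner, the one deleted during the test.
				{APIVersion: "v1", Kind: "ReplicationController", Name: "simpletest-rc-to-be-deleted", UID: "00000000-0000-0000-0000-000000000001", Controller: &isController},
				// Second, non-controller owner that keeps the pod alive.
				{APIVersion: "v1", Kind: "ReplicationController", Name: "simpletest-rc-to-stay", UID: "00000000-0000-0000-0000-000000000002"},
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1"}},
		},
	}
	fmt.Println(len(pod.OwnerReferences))
}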
Jun 2 11:26:35.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:26:35.133: INFO: namespace: e2e-tests-gc-psctg, resource: bindings, ignored listing per whitelist Jun 2 11:26:35.182: INFO: namespace e2e-tests-gc-psctg deletion completed in 8.246748668s • [SLOW TEST:19.948 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:26:35.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-2kth9 Jun 2 11:26:39.327: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-2kth9 STEP: checking the pod's current state and verifying that restartCount is present Jun 2 11:26:39.330: INFO: Initial restart count of pod liveness-http is 0 Jun 2 11:26:59.423: INFO: Restart count of pod e2e-tests-container-probe-2kth9/liveness-http is now 1 (20.092856813s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:26:59.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-2kth9" for this suite. 
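The /healthz case above is the standard kubelet restart loop: the probe GETs /healthz, the test image starts returning failures after a while, and once enough consecutive probes fail the kubelet restarts the container, which is the restart count going from 0 to 1 roughly 20s in. A probe of that shape, v1.13-era API (Probe embeds the Handler type); the image is a placeholder and the timings are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					Name:  "liveness",
					Image: "liveness-test-image", // placeholder: any image whose /healthz eventually starts failing
					LivenessProbe: &corev1.Probe{
						Handler: corev1.Handler{
							HTTPGet: &corev1.HTTPGetAction{
								Path: "/healthz",
								Port: intstr.FromInt(8080),
							},
						},
						InitialDelaySeconds: 15,
						PeriodSeconds:       5,
						// A single failed probe is enough to trigger a restart here.
						FailureThreshold: 1,
					},
				},
			},
		},
	}
	fmt.Println(pod.Spec.Containers[0].LivenessProbe.Handler.HTTPGet.Path)
}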
Jun 2 11:27:05.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:27:05.542: INFO: namespace: e2e-tests-container-probe-2kth9, resource: bindings, ignored listing per whitelist Jun 2 11:27:05.610: INFO: namespace e2e-tests-container-probe-2kth9 deletion completed in 6.14397891s • [SLOW TEST:30.428 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:27:05.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jun 2 11:27:05.708: INFO: PodSpec: initContainers in spec.initContainers Jun 2 11:27:55.732: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-fceb6568-a4c3-11ea-889d-0242ac110018", GenerateName:"", Namespace:"e2e-tests-init-container-nxm7r", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-nxm7r/pods/pod-init-fceb6568-a4c3-11ea-889d-0242ac110018", UID:"fcec0634-a4c3-11ea-99e8-0242ac110002", ResourceVersion:"13824256", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726694025, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"708486487"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-454jr", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0024f2000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-454jr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-454jr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-454jr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0022167b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0022e2000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002216850)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002216870)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002216878), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00221687c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726694025, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726694025, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726694025, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726694025, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.33", StartTime:(*v1.Time)(0xc001b42040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00256a070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00256a0e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://05cb199f1aa3dda7e168f7a6d975aa70da60a32103897be5a8b94376d39b2877"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001b42080), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001b42060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:27:55.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-nxm7r" for this suite. Jun 2 11:28:17.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:28:17.885: INFO: namespace: e2e-tests-init-container-nxm7r, resource: bindings, ignored listing per whitelist Jun 2 11:28:17.897: INFO: namespace e2e-tests-init-container-nxm7r deletion completed in 22.126151708s • [SLOW TEST:72.287 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:28:17.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 2 11:28:22.629: INFO: Successfully updated pod "pod-update-2802cb8b-a4c4-11ea-889d-0242ac110018" STEP: verifying the updated pod is in kubernetes Jun 2 11:28:22.646: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:28:22.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-hpclh" for this suite. 
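The Pods "should be updated" test above creates a pod, modifies it, and then re-reads it to confirm the change ("Pod update OK"). The log does not show which field the suite changes, so the sketch below is a minimal, hypothetical illustration of the pattern using a label mutation, built with the same k8s.io/api types that appear in the struct dumps throughout this log (the k8s.io/api and k8s.io/apimachinery modules are assumed to be available in go.mod).

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Hypothetical stand-in for the pod created by the suite
    // (pod-update-2802cb8b-... in the log above).
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "pod-update-example",
            Labels: map[string]string{"app": "demo"},
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{
                {Name: "nginx", Image: "docker.io/library/nginx:1.14-alpine"},
            },
        },
    }

    // Mutate the object in memory; a client would then send it back with an
    // update or patch request and re-fetch it to verify, as the test does.
    pod.Labels["updated"] = "true"

    fmt.Printf("labels after local update: %v\n", pod.Labels)
}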
Jun 2 11:28:44.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:28:44.701: INFO: namespace: e2e-tests-pods-hpclh, resource: bindings, ignored listing per whitelist Jun 2 11:28:44.750: INFO: namespace e2e-tests-pods-hpclh deletion completed in 22.100341079s • [SLOW TEST:26.853 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:28:44.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0602 11:29:15.447048 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 2 11:29:15.447: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:29:15.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-4nhgp" for this suite. 
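The garbage-collector test above deletes a Deployment with deleteOptions.PropagationPolicy set to Orphan and then waits 30 seconds to confirm the ReplicaSet is left behind rather than cascaded away. A minimal sketch of how such delete options are expressed with the k8s.io/apimachinery types; the client call that would carry them is omitted.

package main

import (
    "encoding/json"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Orphan the dependents (here: the Deployment's ReplicaSet) instead of
    // deleting them along with the owner.
    policy := metav1.DeletePropagationOrphan
    opts := metav1.DeleteOptions{PropagationPolicy: &policy}

    // Rendered as JSON purely for illustration; a real client would pass
    // opts to the delete call for the Deployment.
    b, err := json.Marshal(opts)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(b)) // {"propagationPolicy":"Orphan"}
}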
Jun 2 11:29:23.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:29:23.543: INFO: namespace: e2e-tests-gc-4nhgp, resource: bindings, ignored listing per whitelist Jun 2 11:29:23.548: INFO: namespace e2e-tests-gc-4nhgp deletion completed in 8.097786271s • [SLOW TEST:38.798 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:29:23.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Jun 2 11:29:27.728: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:29:51.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-klqcx" for this suite. Jun 2 11:29:57.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:29:57.861: INFO: namespace: e2e-tests-namespaces-klqcx, resource: bindings, ignored listing per whitelist Jun 2 11:29:57.917: INFO: namespace e2e-tests-namespaces-klqcx deletion completed in 6.099575104s STEP: Destroying namespace "e2e-tests-nsdeletetest-g4xw6" for this suite. Jun 2 11:29:57.919: INFO: Namespace e2e-tests-nsdeletetest-g4xw6 was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-4lmwn" for this suite. 
Jun 2 11:30:03.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:30:03.997: INFO: namespace: e2e-tests-nsdeletetest-4lmwn, resource: bindings, ignored listing per whitelist Jun 2 11:30:04.016: INFO: namespace e2e-tests-nsdeletetest-4lmwn deletion completed in 6.096301527s • [SLOW TEST:40.467 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:30:04.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jun 2 11:30:04.221: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:30:10.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-2ls27" for this suite. 
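The InitContainer tests in this run build a pod whose first init container runs /bin/false, whose second runs /bin/true, and whose app container is the pause image; the full PodSpec is dumped as a Go struct in the RestartAlways case earlier in the log. Below is a compact reconstruction of that spec with the typed API, using the field values visible in that dump; the RestartNever policy is what the test being created here exercises (the earlier dump used RestartAlways).

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

func main() {
    spec := corev1.PodSpec{
        RestartPolicy: corev1.RestartPolicyNever, // RestartAlways in the earlier dump
        InitContainers: []corev1.Container{
            {Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
            {Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
        },
        Containers: []corev1.Container{
            {
                Name:  "run1",
                Image: "k8s.gcr.io/pause:3.1",
                Resources: corev1.ResourceRequirements{
                    // Matches the 100m CPU / 52428800-byte memory limits in the dump.
                    Limits: corev1.ResourceList{
                        corev1.ResourceCPU:    resource.MustParse("100m"),
                        corev1.ResourceMemory: resource.MustParse("52428800"),
                    },
                },
            },
        },
    }
    // init1 fails, so init2 never runs; with RestartPolicy Never the pod
    // fails and the app container run1 is never started.
    fmt.Printf("%d init containers, restart policy %s\n", len(spec.InitContainers), spec.RestartPolicy)
}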
Jun 2 11:30:16.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:30:17.002: INFO: namespace: e2e-tests-init-container-2ls27, resource: bindings, ignored listing per whitelist Jun 2 11:30:17.014: INFO: namespace e2e-tests-init-container-2ls27 deletion completed in 6.08588453s • [SLOW TEST:12.998 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:30:17.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jun 2 11:30:17.125: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:30:25.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-snjkn" for this suite. 
Jun 2 11:30:31.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:30:31.122: INFO: namespace: e2e-tests-init-container-snjkn, resource: bindings, ignored listing per whitelist Jun 2 11:30:31.165: INFO: namespace e2e-tests-init-container-snjkn deletion completed in 6.100285779s • [SLOW TEST:14.151 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:30:31.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 2 11:30:31.282: INFO: Waiting up to 5m0s for pod "downward-api-776e7fba-a4c4-11ea-889d-0242ac110018" in namespace "e2e-tests-downward-api-knh2p" to be "success or failure" Jun 2 11:30:31.308: INFO: Pod "downward-api-776e7fba-a4c4-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 25.848482ms Jun 2 11:30:33.312: INFO: Pod "downward-api-776e7fba-a4c4-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029814959s Jun 2 11:30:35.316: INFO: Pod "downward-api-776e7fba-a4c4-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034221213s STEP: Saw pod success Jun 2 11:30:35.316: INFO: Pod "downward-api-776e7fba-a4c4-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:30:35.319: INFO: Trying to get logs from node hunter-worker pod downward-api-776e7fba-a4c4-11ea-889d-0242ac110018 container dapi-container: STEP: delete the pod Jun 2 11:30:35.351: INFO: Waiting for pod downward-api-776e7fba-a4c4-11ea-889d-0242ac110018 to disappear Jun 2 11:30:35.365: INFO: Pod downward-api-776e7fba-a4c4-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:30:35.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-knh2p" for this suite. 
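The Downward API test above injects the node's host IP into a container environment variable and checks the container's output for it. The pod manifest is not printed in the log, so the variable name below is hypothetical; status.hostIP is the standard downward-API field path for this value, exposed through the same typed API as above.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Hypothetical container matching the dapi-container named in the log.
    c := corev1.Container{
        Name:    "dapi-container",
        Image:   "docker.io/library/busybox:1.29",
        Command: []string{"sh", "-c", "env"},
        Env: []corev1.EnvVar{
            {
                Name: "HOST_IP", // hypothetical variable name
                ValueFrom: &corev1.EnvVarSource{
                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
                },
            },
        },
    }
    fmt.Printf("env[0]=%s from fieldRef %s\n", c.Env[0].Name, c.Env[0].ValueFrom.FieldRef.FieldPath)
}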
Jun 2 11:30:41.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:30:41.458: INFO: namespace: e2e-tests-downward-api-knh2p, resource: bindings, ignored listing per whitelist Jun 2 11:30:41.472: INFO: namespace e2e-tests-downward-api-knh2p deletion completed in 6.104100164s • [SLOW TEST:10.306 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:30:41.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:30:45.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-n7npr" for this suite. 
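The first Kubelet test above ("should print the output to logs") runs a busybox command in a pod and compares the container's log output with the expected text. Only the namespace lifecycle is visible in the log, so the command below is a placeholder that illustrates the shape of such a pod's container.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    c := corev1.Container{
        Name:  "busybox-logger", // hypothetical name
        Image: "docker.io/library/busybox:1.29",
        // Whatever this command writes to stdout is what the kubelet exposes
        // as the container's logs.
        Command: []string{"sh", "-c", "echo hello from the busybox container"},
    }
    fmt.Printf("container %q runs: %v\n", c.Name, c.Command)
}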
Jun 2 11:31:35.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:31:35.769: INFO: namespace: e2e-tests-kubelet-test-n7npr, resource: bindings, ignored listing per whitelist Jun 2 11:31:35.777: INFO: namespace e2e-tests-kubelet-test-n7npr deletion completed in 50.148507165s • [SLOW TEST:54.305 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:31:35.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:31:39.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-82rj7" for this suite. 
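The second Kubelet test above ("should not write to root filesystem") schedules a busybox container whose root filesystem is mounted read-only and expects the write attempt to fail. The manifest is not shown in the log, so this is only a sketch of the relevant security-context setting.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    readOnly := true
    c := corev1.Container{
        Name:    "busybox-readonly",                      // hypothetical name
        Image:   "docker.io/library/busybox:1.29",
        Command: []string{"sh", "-c", "echo hi > /file"}, // expected to fail on a read-only rootfs
        SecurityContext: &corev1.SecurityContext{
            ReadOnlyRootFilesystem: &readOnly,
        },
    }
    fmt.Printf("readOnlyRootFilesystem=%v\n", *c.SecurityContext.ReadOnlyRootFilesystem)
}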
Jun 2 11:32:21.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:32:21.964: INFO: namespace: e2e-tests-kubelet-test-82rj7, resource: bindings, ignored listing per whitelist Jun 2 11:32:21.997: INFO: namespace e2e-tests-kubelet-test-82rj7 deletion completed in 42.098090281s • [SLOW TEST:46.219 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:32:21.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jun 2 11:32:22.085: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-74sc9,SelfLink:/api/v1/namespaces/e2e-tests-watch-74sc9/configmaps/e2e-watch-test-configmap-a,UID:b97e64d9-a4c4-11ea-99e8-0242ac110002,ResourceVersion:13825068,Generation:0,CreationTimestamp:2020-06-02 11:32:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 2 11:32:22.085: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-74sc9,SelfLink:/api/v1/namespaces/e2e-tests-watch-74sc9/configmaps/e2e-watch-test-configmap-a,UID:b97e64d9-a4c4-11ea-99e8-0242ac110002,ResourceVersion:13825068,Generation:0,CreationTimestamp:2020-06-02 11:32:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jun 2 11:32:32.093: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-74sc9,SelfLink:/api/v1/namespaces/e2e-tests-watch-74sc9/configmaps/e2e-watch-test-configmap-a,UID:b97e64d9-a4c4-11ea-99e8-0242ac110002,ResourceVersion:13825088,Generation:0,CreationTimestamp:2020-06-02 11:32:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 2 11:32:32.094: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-74sc9,SelfLink:/api/v1/namespaces/e2e-tests-watch-74sc9/configmaps/e2e-watch-test-configmap-a,UID:b97e64d9-a4c4-11ea-99e8-0242ac110002,ResourceVersion:13825088,Generation:0,CreationTimestamp:2020-06-02 11:32:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jun 2 11:32:42.102: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-74sc9,SelfLink:/api/v1/namespaces/e2e-tests-watch-74sc9/configmaps/e2e-watch-test-configmap-a,UID:b97e64d9-a4c4-11ea-99e8-0242ac110002,ResourceVersion:13825108,Generation:0,CreationTimestamp:2020-06-02 11:32:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 2 11:32:42.102: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-74sc9,SelfLink:/api/v1/namespaces/e2e-tests-watch-74sc9/configmaps/e2e-watch-test-configmap-a,UID:b97e64d9-a4c4-11ea-99e8-0242ac110002,ResourceVersion:13825108,Generation:0,CreationTimestamp:2020-06-02 11:32:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jun 2 11:32:52.109: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-74sc9,SelfLink:/api/v1/namespaces/e2e-tests-watch-74sc9/configmaps/e2e-watch-test-configmap-a,UID:b97e64d9-a4c4-11ea-99e8-0242ac110002,ResourceVersion:13825128,Generation:0,CreationTimestamp:2020-06-02 11:32:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Jun 2 11:32:52.109: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-74sc9,SelfLink:/api/v1/namespaces/e2e-tests-watch-74sc9/configmaps/e2e-watch-test-configmap-a,UID:b97e64d9-a4c4-11ea-99e8-0242ac110002,ResourceVersion:13825128,Generation:0,CreationTimestamp:2020-06-02 11:32:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jun 2 11:33:02.117: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-74sc9,SelfLink:/api/v1/namespaces/e2e-tests-watch-74sc9/configmaps/e2e-watch-test-configmap-b,UID:d15a4488-a4c4-11ea-99e8-0242ac110002,ResourceVersion:13825148,Generation:0,CreationTimestamp:2020-06-02 11:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 2 11:33:02.118: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-74sc9,SelfLink:/api/v1/namespaces/e2e-tests-watch-74sc9/configmaps/e2e-watch-test-configmap-b,UID:d15a4488-a4c4-11ea-99e8-0242ac110002,ResourceVersion:13825148,Generation:0,CreationTimestamp:2020-06-02 11:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jun 2 11:33:12.123: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-74sc9,SelfLink:/api/v1/namespaces/e2e-tests-watch-74sc9/configmaps/e2e-watch-test-configmap-b,UID:d15a4488-a4c4-11ea-99e8-0242ac110002,ResourceVersion:13825168,Generation:0,CreationTimestamp:2020-06-02 11:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 2 11:33:12.123: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-74sc9,SelfLink:/api/v1/namespaces/e2e-tests-watch-74sc9/configmaps/e2e-watch-test-configmap-b,UID:d15a4488-a4c4-11ea-99e8-0242ac110002,ResourceVersion:13825168,Generation:0,CreationTimestamp:2020-06-02 11:33:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:33:22.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-74sc9" for this suite. Jun 2 11:33:28.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:33:28.183: INFO: namespace: e2e-tests-watch-74sc9, resource: bindings, ignored listing per whitelist Jun 2 11:33:28.221: INFO: namespace e2e-tests-watch-74sc9 deletion completed in 6.091940671s • [SLOW TEST:66.223 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:33:28.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Jun 2 11:33:28.861: INFO: created pod pod-service-account-defaultsa Jun 2 11:33:28.861: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jun 2 11:33:28.874: INFO: created pod pod-service-account-mountsa Jun 2 11:33:28.874: INFO: pod pod-service-account-mountsa service account token volume mount: true Jun 2 11:33:28.880: INFO: created pod pod-service-account-nomountsa Jun 2 11:33:28.880: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jun 2 11:33:28.965: INFO: created pod pod-service-account-defaultsa-mountspec Jun 2 11:33:28.965: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jun 2 11:33:28.978: INFO: created pod pod-service-account-mountsa-mountspec Jun 2 11:33:28.978: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jun 2 11:33:29.006: INFO: created pod pod-service-account-nomountsa-mountspec Jun 2 11:33:29.006: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jun 2 11:33:29.044: INFO: created pod pod-service-account-defaultsa-nomountspec Jun 2 11:33:29.044: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jun 2 11:33:29.115: INFO: created pod pod-service-account-mountsa-nomountspec Jun 2 11:33:29.115: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jun 2 11:33:29.123: INFO: created pod pod-service-account-nomountsa-nomountspec Jun 2 11:33:29.123: INFO: pod 
pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:33:29.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-s646q" for this suite. Jun 2 11:33:57.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:33:57.373: INFO: namespace: e2e-tests-svcaccounts-s646q, resource: bindings, ignored listing per whitelist Jun 2 11:33:57.429: INFO: namespace e2e-tests-svcaccounts-s646q deletion completed in 28.150784521s • [SLOW TEST:29.207 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:33:57.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jun 2 11:33:57.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4425f' Jun 2 11:33:59.847: INFO: stderr: "" Jun 2 11:33:59.847: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jun 2 11:34:00.852: INFO: Selector matched 1 pods for map[app:redis] Jun 2 11:34:00.852: INFO: Found 0 / 1 Jun 2 11:34:01.851: INFO: Selector matched 1 pods for map[app:redis] Jun 2 11:34:01.851: INFO: Found 0 / 1 Jun 2 11:34:02.852: INFO: Selector matched 1 pods for map[app:redis] Jun 2 11:34:02.852: INFO: Found 0 / 1 Jun 2 11:34:03.852: INFO: Selector matched 1 pods for map[app:redis] Jun 2 11:34:03.852: INFO: Found 1 / 1 Jun 2 11:34:03.852: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 2 11:34:03.855: INFO: Selector matched 1 pods for map[app:redis] Jun 2 11:34:03.855: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 2 11:34:03.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-ss72w --namespace=e2e-tests-kubectl-4425f -p {"metadata":{"annotations":{"x":"y"}}}' Jun 2 11:34:03.961: INFO: stderr: "" Jun 2 11:34:03.961: INFO: stdout: "pod/redis-master-ss72w patched\n" STEP: checking annotations Jun 2 11:34:03.967: INFO: Selector matched 1 pods for map[app:redis] Jun 2 11:34:03.967: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
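The Kubectl patch test above runs kubectl patch pod ... -p {"metadata":{"annotations":{"x":"y"}}} against the Redis master pod and then checks that the annotation landed. The same strategic-merge-patch body can be built programmatically; the sketch below only constructs the JSON payload shown in the log and uses nothing beyond the standard library.

package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    // Mirrors the -p argument passed to kubectl above.
    patch := map[string]interface{}{
        "metadata": map[string]interface{}{
            "annotations": map[string]string{"x": "y"},
        },
    }
    b, err := json.Marshal(patch)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(b)) // {"metadata":{"annotations":{"x":"y"}}}
}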
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:34:03.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4425f" for this suite. Jun 2 11:34:25.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:34:26.108: INFO: namespace: e2e-tests-kubectl-4425f, resource: bindings, ignored listing per whitelist Jun 2 11:34:26.115: INFO: namespace e2e-tests-kubectl-4425f deletion completed in 22.145331119s • [SLOW TEST:28.686 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:34:26.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 2 11:34:26.296: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03842891-a4c5-11ea-889d-0242ac110018" in namespace "e2e-tests-downward-api-sm8xj" to be "success or failure" Jun 2 11:34:26.307: INFO: Pod "downwardapi-volume-03842891-a4c5-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.519103ms Jun 2 11:34:28.313: INFO: Pod "downwardapi-volume-03842891-a4c5-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016823479s Jun 2 11:34:30.318: INFO: Pod "downwardapi-volume-03842891-a4c5-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021386135s STEP: Saw pod success Jun 2 11:34:30.318: INFO: Pod "downwardapi-volume-03842891-a4c5-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:34:30.321: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-03842891-a4c5-11ea-889d-0242ac110018 container client-container: STEP: delete the pod Jun 2 11:34:30.368: INFO: Waiting for pod downwardapi-volume-03842891-a4c5-11ea-889d-0242ac110018 to disappear Jun 2 11:34:30.420: INFO: Pod downwardapi-volume-03842891-a4c5-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:34:30.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-sm8xj" for this suite. 
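The Downward API volume test above mounts pod metadata as files and checks the files' permission bits. The mode the suite actually sets is not visible in the log, so the 0400 below is an assumption for illustration; defaultMode is the field the test name refers to.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    mode := int32(0400) // assumed mode, for illustration only
    vol := corev1.Volume{
        Name: "podinfo", // hypothetical volume name
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                DefaultMode: &mode,
                Items: []corev1.DownwardAPIVolumeFile{
                    {
                        Path:     "podname",
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                    },
                },
            },
        },
    }
    fmt.Printf("volume %q defaultMode=%o\n", vol.Name, *vol.VolumeSource.DownwardAPI.DefaultMode)
}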
Jun 2 11:34:36.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:34:36.493: INFO: namespace: e2e-tests-downward-api-sm8xj, resource: bindings, ignored listing per whitelist Jun 2 11:34:36.537: INFO: namespace e2e-tests-downward-api-sm8xj deletion completed in 6.11335647s • [SLOW TEST:10.422 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:34:36.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-09b56144-a4c5-11ea-889d-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 2 11:34:36.669: INFO: Waiting up to 5m0s for pod "pod-configmaps-09b5c406-a4c5-11ea-889d-0242ac110018" in namespace "e2e-tests-configmap-kkw5v" to be "success or failure" Jun 2 11:34:36.674: INFO: Pod "pod-configmaps-09b5c406-a4c5-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.13596ms Jun 2 11:34:38.679: INFO: Pod "pod-configmaps-09b5c406-a4c5-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009392945s Jun 2 11:34:40.683: INFO: Pod "pod-configmaps-09b5c406-a4c5-11ea-889d-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.013476183s Jun 2 11:34:42.687: INFO: Pod "pod-configmaps-09b5c406-a4c5-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017685026s STEP: Saw pod success Jun 2 11:34:42.687: INFO: Pod "pod-configmaps-09b5c406-a4c5-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:34:42.690: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-09b5c406-a4c5-11ea-889d-0242ac110018 container configmap-volume-test: STEP: delete the pod Jun 2 11:34:42.719: INFO: Waiting for pod pod-configmaps-09b5c406-a4c5-11ea-889d-0242ac110018 to disappear Jun 2 11:34:42.728: INFO: Pod pod-configmaps-09b5c406-a4c5-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:34:42.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-kkw5v" for this suite. 
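The ConfigMap test above creates configmap-test-volume-09b56144-..., mounts it into a pod, and reads the content back from the container configmap-volume-test. A minimal sketch of a ConfigMap-backed volume with the same API types; the ConfigMap name and mount path below are placeholders.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    pod := corev1.PodSpec{
        Volumes: []corev1.Volume{
            {
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{
                            Name: "configmap-test-volume", // placeholder name
                        },
                    },
                },
            },
        },
        Containers: []corev1.Container{
            {
                Name:    "configmap-volume-test",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c", "cat /etc/configmap-volume/*"},
                VolumeMounts: []corev1.VolumeMount{
                    {Name: "configmap-volume", MountPath: "/etc/configmap-volume"},
                },
            },
        },
    }
    fmt.Printf("pod mounts %d volume(s)\n", len(pod.Volumes))
}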
Jun 2 11:34:48.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:34:48.762: INFO: namespace: e2e-tests-configmap-kkw5v, resource: bindings, ignored listing per whitelist Jun 2 11:34:48.828: INFO: namespace e2e-tests-configmap-kkw5v deletion completed in 6.096841912s • [SLOW TEST:12.291 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:34:48.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-xxhd4 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet Jun 2 11:34:49.008: INFO: Found 0 stateful pods, waiting for 3 Jun 2 11:34:59.012: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 2 11:34:59.012: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 2 11:34:59.012: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Jun 2 11:35:09.013: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 2 11:35:09.013: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 2 11:35:09.013: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jun 2 11:35:09.041: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jun 2 11:35:19.084: INFO: Updating stateful set ss2 Jun 2 11:35:19.116: INFO: Waiting for Pod e2e-tests-statefulset-xxhd4/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Jun 2 11:35:29.209: INFO: Found 2 stateful pods, waiting for 3 Jun 2 11:35:39.213: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 2 11:35:39.213: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - 
Ready=true Jun 2 11:35:39.213: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jun 2 11:35:39.234: INFO: Updating stateful set ss2 Jun 2 11:35:39.249: INFO: Waiting for Pod e2e-tests-statefulset-xxhd4/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 2 11:35:49.273: INFO: Updating stateful set ss2 Jun 2 11:35:49.288: INFO: Waiting for StatefulSet e2e-tests-statefulset-xxhd4/ss2 to complete update Jun 2 11:35:49.288: INFO: Waiting for Pod e2e-tests-statefulset-xxhd4/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 2 11:35:59.297: INFO: Deleting all statefulset in ns e2e-tests-statefulset-xxhd4 Jun 2 11:35:59.301: INFO: Scaling statefulset ss2 to 0 Jun 2 11:36:29.322: INFO: Waiting for statefulset status.replicas updated to 0 Jun 2 11:36:29.326: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:36:29.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-xxhd4" for this suite. Jun 2 11:36:37.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:36:37.405: INFO: namespace: e2e-tests-statefulset-xxhd4, resource: bindings, ignored listing per whitelist Jun 2 11:36:37.448: INFO: namespace e2e-tests-statefulset-xxhd4 deletion completed in 8.107235194s • [SLOW TEST:108.619 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:36:37.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 2 11:36:42.092: INFO: Successfully updated pod "annotationupdate51c21827-a4c5-11ea-889d-0242ac110018" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:36:44.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-downward-api-7qw9n" for this suite. Jun 2 11:37:06.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:37:06.247: INFO: namespace: e2e-tests-downward-api-7qw9n, resource: bindings, ignored listing per whitelist Jun 2 11:37:06.262: INFO: namespace e2e-tests-downward-api-7qw9n deletion completed in 22.133552489s • [SLOW TEST:28.814 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:37:06.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Jun 2 11:37:06.356: INFO: Waiting up to 5m0s for pod "var-expansion-62ed68c0-a4c5-11ea-889d-0242ac110018" in namespace "e2e-tests-var-expansion-fw85x" to be "success or failure" Jun 2 11:37:06.361: INFO: Pod "var-expansion-62ed68c0-a4c5-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.354279ms Jun 2 11:37:08.365: INFO: Pod "var-expansion-62ed68c0-a4c5-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008132892s Jun 2 11:37:10.368: INFO: Pod "var-expansion-62ed68c0-a4c5-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011960949s STEP: Saw pod success Jun 2 11:37:10.368: INFO: Pod "var-expansion-62ed68c0-a4c5-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:37:10.372: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-62ed68c0-a4c5-11ea-889d-0242ac110018 container dapi-container: STEP: delete the pod Jun 2 11:37:10.522: INFO: Waiting for pod var-expansion-62ed68c0-a4c5-11ea-889d-0242ac110018 to disappear Jun 2 11:37:10.525: INFO: Pod var-expansion-62ed68c0-a4c5-11ea-889d-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:37:10.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-fw85x" for this suite. 
Jun 2 11:37:16.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:37:16.688: INFO: namespace: e2e-tests-var-expansion-fw85x, resource: bindings, ignored listing per whitelist Jun 2 11:37:16.695: INFO: namespace e2e-tests-var-expansion-fw85x deletion completed in 6.166253081s • [SLOW TEST:10.433 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:37:16.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 2 11:37:16.916: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 2 11:37:21.929: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 2 11:37:21.929: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 2 11:37:21.987: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-9sqqv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9sqqv/deployments/test-cleanup-deployment,UID:6c3c2add-a4c5-11ea-99e8-0242ac110002,ResourceVersion:13826159,Generation:1,CreationTimestamp:2020-06-02 11:37:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jun 2 11:37:22.001: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. Jun 2 11:37:22.001: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jun 2 11:37:22.002: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-9sqqv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9sqqv/replicasets/test-cleanup-controller,UID:6933419d-a4c5-11ea-99e8-0242ac110002,ResourceVersion:13826160,Generation:1,CreationTimestamp:2020-06-02 11:37:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 6c3c2add-a4c5-11ea-99e8-0242ac110002 0xc0021e1e77 0xc0021e1e78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 2 11:37:22.008: INFO: Pod "test-cleanup-controller-f898c" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-f898c,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-9sqqv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9sqqv/pods/test-cleanup-controller-f898c,UID:693af615-a4c5-11ea-99e8-0242ac110002,ResourceVersion:13826153,Generation:0,CreationTimestamp:2020-06-02 11:37:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 6933419d-a4c5-11ea-99e8-0242ac110002 0xc00249a597 0xc00249a598}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6mfkq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6mfkq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6mfkq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00249a610} {node.kubernetes.io/unreachable Exists NoExecute 0xc00249a630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:37:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:37:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:37:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:37:16 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.51,StartTime:2020-06-02 11:37:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-02 11:37:19 +0000 UTC,} nil} {nil nil nil} true 0 
docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://53d9dc486e5f6769bd00f055b40027c93852eb767a08d0fdefb1163b7b0fa239}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:37:22.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-9sqqv" for this suite. Jun 2 11:37:28.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:37:28.185: INFO: namespace: e2e-tests-deployment-9sqqv, resource: bindings, ignored listing per whitelist Jun 2 11:37:28.204: INFO: namespace e2e-tests-deployment-9sqqv deletion completed in 6.116603269s • [SLOW TEST:11.509 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:37:28.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-7008c815-a4c5-11ea-889d-0242ac110018 STEP: Creating a pod to test consume secrets Jun 2 11:37:28.357: INFO: Waiting up to 5m0s for pod "pod-secrets-700afd4c-a4c5-11ea-889d-0242ac110018" in namespace "e2e-tests-secrets-tlzvt" to be "success or failure" Jun 2 11:37:28.375: INFO: Pod "pod-secrets-700afd4c-a4c5-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.221595ms Jun 2 11:37:30.380: INFO: Pod "pod-secrets-700afd4c-a4c5-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022736906s Jun 2 11:37:32.385: INFO: Pod "pod-secrets-700afd4c-a4c5-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027453769s STEP: Saw pod success Jun 2 11:37:32.385: INFO: Pod "pod-secrets-700afd4c-a4c5-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:37:32.388: INFO: Trying to get logs from node hunter-worker pod pod-secrets-700afd4c-a4c5-11ea-889d-0242ac110018 container secret-env-test: STEP: delete the pod Jun 2 11:37:32.411: INFO: Waiting for pod pod-secrets-700afd4c-a4c5-11ea-889d-0242ac110018 to disappear Jun 2 11:37:32.415: INFO: Pod pod-secrets-700afd4c-a4c5-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:37:32.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-tlzvt" for this suite. 
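The Secrets test above ("should be consumable from pods in env vars") injects a secret key into a container through valueFrom.secretKeyRef. A hedged sketch of that wiring in Go; the secret name, key, and env var name are made up, not the generated ones in the log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A secret with one key; the pod surfaces it to the container as SECRET_DATA.
	secret := corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-demo"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	for _, obj := range []interface{}{secret, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}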
Jun 2 11:37:38.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:37:38.492: INFO: namespace: e2e-tests-secrets-tlzvt, resource: bindings, ignored listing per whitelist Jun 2 11:37:38.506: INFO: namespace e2e-tests-secrets-tlzvt deletion completed in 6.087494018s • [SLOW TEST:10.302 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:37:38.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 2 11:37:43.408: INFO: Successfully updated pod "labelsupdate7640085d-a4c5-11ea-889d-0242ac110018" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:37:45.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zcspn" for this suite. 
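The projected downward API test just completed mounts the pod's labels as a file and expects the file contents to change after the labels are patched (hence the "Successfully updated pod" line). A minimal sketch of the volume wiring, with illustrative names and mount path:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// metadata.labels is projected to /etc/podinfo/labels; the kubelet rewrites
	// that file when the pod's labels are updated, without restarting the pod.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo",
			Labels: map[string]string{"step": "one"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "labels",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}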
Jun 2 11:38:07.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:38:07.499: INFO: namespace: e2e-tests-projected-zcspn, resource: bindings, ignored listing per whitelist Jun 2 11:38:07.547: INFO: namespace e2e-tests-projected-zcspn deletion completed in 22.104510566s • [SLOW TEST:29.042 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:38:07.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jun 2 11:38:07.679: INFO: Pod name pod-release: Found 0 pods out of 1 Jun 2 11:38:12.683: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:38:13.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-8bq5j" for this suite. 
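In the ReplicationController test above, relabeling a managed pod so it no longer matches the controller's selector makes the RC orphan ("release") that pod and create a replacement to keep the replica count. A sketch of the selector/label relationship the test depends on, with illustrative names:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// The RC manages any pod whose labels match Spec.Selector. If a managed
	// pod's name=pod-release label is changed, the RC removes its controller
	// ownerReference from that pod and starts a new replica in its place.
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-release"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: map[string]string{"name": "pod-release"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "pod-release"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "nginx:1.14-alpine"}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}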
Jun 2 11:38:21.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:38:21.987: INFO: namespace: e2e-tests-replication-controller-8bq5j, resource: bindings, ignored listing per whitelist Jun 2 11:38:22.019: INFO: namespace e2e-tests-replication-controller-8bq5j deletion completed in 8.160805364s • [SLOW TEST:14.471 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:38:22.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 2 11:38:22.143: INFO: Waiting up to 5m0s for pod "downwardapi-volume-901a0789-a4c5-11ea-889d-0242ac110018" in namespace "e2e-tests-downward-api-9cd9v" to be "success or failure" Jun 2 11:38:22.161: INFO: Pod "downwardapi-volume-901a0789-a4c5-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.727205ms Jun 2 11:38:24.164: INFO: Pod "downwardapi-volume-901a0789-a4c5-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021388297s Jun 2 11:38:26.170: INFO: Pod "downwardapi-volume-901a0789-a4c5-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026807386s STEP: Saw pod success Jun 2 11:38:26.170: INFO: Pod "downwardapi-volume-901a0789-a4c5-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:38:26.173: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-901a0789-a4c5-11ea-889d-0242ac110018 container client-container: STEP: delete the pod Jun 2 11:38:26.214: INFO: Waiting for pod downwardapi-volume-901a0789-a4c5-11ea-889d-0242ac110018 to disappear Jun 2 11:38:26.249: INFO: Pod downwardapi-volume-901a0789-a4c5-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:38:26.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9cd9v" for this suite. 
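The downward API volume test above exposes the container's own memory limit as a file through resourceFieldRef, then reads it back from the pod logs. A minimal sketch, assuming an illustrative mount path and a made-up 64Mi limit:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// limits.memory for client-container is written (in bytes) to
	// /etc/podinfo/memory_limit, which the container prints on start.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}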
Jun 2 11:38:32.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:38:32.343: INFO: namespace: e2e-tests-downward-api-9cd9v, resource: bindings, ignored listing per whitelist Jun 2 11:38:32.367: INFO: namespace e2e-tests-downward-api-9cd9v deletion completed in 6.114041716s • [SLOW TEST:10.348 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:38:32.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 2 11:38:32.515: INFO: Waiting up to 5m0s for pod "downward-api-9647758f-a4c5-11ea-889d-0242ac110018" in namespace "e2e-tests-downward-api-nm7rd" to be "success or failure" Jun 2 11:38:32.519: INFO: Pod "downward-api-9647758f-a4c5-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.681601ms Jun 2 11:38:34.523: INFO: Pod "downward-api-9647758f-a4c5-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007985012s Jun 2 11:38:36.527: INFO: Pod "downward-api-9647758f-a4c5-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012408115s STEP: Saw pod success Jun 2 11:38:36.527: INFO: Pod "downward-api-9647758f-a4c5-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:38:36.531: INFO: Trying to get logs from node hunter-worker pod downward-api-9647758f-a4c5-11ea-889d-0242ac110018 container dapi-container: STEP: delete the pod Jun 2 11:38:36.570: INFO: Waiting for pod downward-api-9647758f-a4c5-11ea-889d-0242ac110018 to disappear Jun 2 11:38:36.578: INFO: Pod downward-api-9647758f-a4c5-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:38:36.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-nm7rd" for this suite. 
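The downward API env test above surfaces the pod's UID through an env var backed by fieldRef on metadata.uid. A short sketch, with an illustrative container name and env var name:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// POD_UID is populated by the kubelet from metadata.uid before the
	// container's command runs, so echoing it proves the plumbing works.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo \"POD_UID=$POD_UID\""},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}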
Jun 2 11:38:42.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:38:42.643: INFO: namespace: e2e-tests-downward-api-nm7rd, resource: bindings, ignored listing per whitelist Jun 2 11:38:42.761: INFO: namespace e2e-tests-downward-api-nm7rd deletion completed in 6.179257451s • [SLOW TEST:10.394 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:38:42.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-9c79d34e-a4c5-11ea-889d-0242ac110018 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:38:46.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-k24zx" for this suite. 
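The ConfigMap test above verifies that both text (data) and binary (binaryData) keys show up as files when the ConfigMap is mounted as a volume. A hedged sketch; the key names, bytes, and mount path are made up:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One text key and one binary key; both become files under /etc/configmap.
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-demo"},
		Data:       map[string]string{"data": "value-1"},
		BinaryData: map[string][]byte{"dump.bin": {0xff, 0xfe, 0xfd}},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/configmap && sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/configmap"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					},
				},
			}},
		},
	}
	for _, obj := range []interface{}{cm, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}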
Jun 2 11:39:08.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:39:09.035: INFO: namespace: e2e-tests-configmap-k24zx, resource: bindings, ignored listing per whitelist Jun 2 11:39:09.073: INFO: namespace e2e-tests-configmap-k24zx deletion completed in 22.097496394s • [SLOW TEST:26.312 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:39:09.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 2 11:39:09.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-ffsc5' Jun 2 11:39:09.267: INFO: stderr: "" Jun 2 11:39:09.267: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Jun 2 11:39:09.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-ffsc5' Jun 2 11:39:21.270: INFO: stderr: "" Jun 2 11:39:21.270: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:39:21.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ffsc5" for this suite. 
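The kubectl test above depends on --restart=Never (generator run-pod/v1, as shown in the command line in the log) producing a bare Pod rather than a Deployment or Job. Roughly the object that command creates, sketched with the image from the log and an assumed run label:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Approximately what `kubectl run e2e-test-nginx-pod --restart=Never
	// --image=docker.io/library/nginx:1.14-alpine` creates: a single Pod
	// whose restartPolicy is Never, so it is never restarted on exit.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "e2e-test-nginx-pod",
			Labels: map[string]string{"run": "e2e-test-nginx-pod"},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "e2e-test-nginx-pod",
				Image: "docker.io/library/nginx:1.14-alpine",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}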
Jun 2 11:39:27.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:39:27.358: INFO: namespace: e2e-tests-kubectl-ffsc5, resource: bindings, ignored listing per whitelist Jun 2 11:39:27.366: INFO: namespace e2e-tests-kubectl-ffsc5 deletion completed in 6.092386401s • [SLOW TEST:18.292 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:39:27.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-88kvg [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-88kvg STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-88kvg Jun 2 11:39:27.501: INFO: Found 0 stateful pods, waiting for 1 Jun 2 11:39:37.506: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jun 2 11:39:37.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 2 11:39:37.746: INFO: stderr: "I0602 11:39:37.630207 1602 log.go:172] (0xc0007f8160) (0xc0006fc640) Create stream\nI0602 11:39:37.630259 1602 log.go:172] (0xc0007f8160) (0xc0006fc640) Stream added, broadcasting: 1\nI0602 11:39:37.631956 1602 log.go:172] (0xc0007f8160) Reply frame received for 1\nI0602 11:39:37.631991 1602 log.go:172] (0xc0007f8160) (0xc0005d4d20) Create stream\nI0602 11:39:37.631999 1602 log.go:172] (0xc0007f8160) (0xc0005d4d20) Stream added, broadcasting: 3\nI0602 11:39:37.632712 1602 log.go:172] (0xc0007f8160) Reply frame received for 3\nI0602 11:39:37.632738 1602 log.go:172] (0xc0007f8160) (0xc0006fc6e0) Create stream\nI0602 11:39:37.632745 1602 log.go:172] (0xc0007f8160) (0xc0006fc6e0) Stream added, broadcasting: 5\nI0602 11:39:37.633555 1602 log.go:172] (0xc0007f8160) Reply frame received for 5\nI0602 11:39:37.738654 1602 log.go:172] (0xc0007f8160) Data frame received 
for 5\nI0602 11:39:37.738690 1602 log.go:172] (0xc0006fc6e0) (5) Data frame handling\nI0602 11:39:37.738712 1602 log.go:172] (0xc0007f8160) Data frame received for 3\nI0602 11:39:37.738718 1602 log.go:172] (0xc0005d4d20) (3) Data frame handling\nI0602 11:39:37.738728 1602 log.go:172] (0xc0005d4d20) (3) Data frame sent\nI0602 11:39:37.738734 1602 log.go:172] (0xc0007f8160) Data frame received for 3\nI0602 11:39:37.738739 1602 log.go:172] (0xc0005d4d20) (3) Data frame handling\nI0602 11:39:37.740384 1602 log.go:172] (0xc0007f8160) Data frame received for 1\nI0602 11:39:37.740402 1602 log.go:172] (0xc0006fc640) (1) Data frame handling\nI0602 11:39:37.740415 1602 log.go:172] (0xc0006fc640) (1) Data frame sent\nI0602 11:39:37.740431 1602 log.go:172] (0xc0007f8160) (0xc0006fc640) Stream removed, broadcasting: 1\nI0602 11:39:37.740444 1602 log.go:172] (0xc0007f8160) Go away received\nI0602 11:39:37.740782 1602 log.go:172] (0xc0007f8160) (0xc0006fc640) Stream removed, broadcasting: 1\nI0602 11:39:37.740808 1602 log.go:172] (0xc0007f8160) (0xc0005d4d20) Stream removed, broadcasting: 3\nI0602 11:39:37.740821 1602 log.go:172] (0xc0007f8160) (0xc0006fc6e0) Stream removed, broadcasting: 5\n" Jun 2 11:39:37.746: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 2 11:39:37.746: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 2 11:39:37.750: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 2 11:39:47.754: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 2 11:39:47.754: INFO: Waiting for statefulset status.replicas updated to 0 Jun 2 11:39:47.771: INFO: POD NODE PHASE GRACE CONDITIONS Jun 2 11:39:47.771: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC }] Jun 2 11:39:47.771: INFO: Jun 2 11:39:47.771: INFO: StatefulSet ss has not reached scale 3, at 1 Jun 2 11:39:48.777: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991153043s Jun 2 11:39:49.869: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.985979234s Jun 2 11:39:50.874: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.893189659s Jun 2 11:39:51.880: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.888761363s Jun 2 11:39:52.884: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.882994537s Jun 2 11:39:53.889: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.878533541s Jun 2 11:39:54.894: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.872910769s Jun 2 11:39:55.899: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.868878643s Jun 2 11:39:56.904: INFO: Verifying statefulset ss doesn't scale past 3 for another 863.838211ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-88kvg Jun 2 11:39:57.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c 
mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:39:58.136: INFO: stderr: "I0602 11:39:58.067669 1625 log.go:172] (0xc0006ce420) (0xc0005ef400) Create stream\nI0602 11:39:58.067738 1625 log.go:172] (0xc0006ce420) (0xc0005ef400) Stream added, broadcasting: 1\nI0602 11:39:58.070209 1625 log.go:172] (0xc0006ce420) Reply frame received for 1\nI0602 11:39:58.070248 1625 log.go:172] (0xc0006ce420) (0xc00063e000) Create stream\nI0602 11:39:58.070257 1625 log.go:172] (0xc0006ce420) (0xc00063e000) Stream added, broadcasting: 3\nI0602 11:39:58.071131 1625 log.go:172] (0xc0006ce420) Reply frame received for 3\nI0602 11:39:58.071171 1625 log.go:172] (0xc0006ce420) (0xc0006c6000) Create stream\nI0602 11:39:58.071190 1625 log.go:172] (0xc0006ce420) (0xc0006c6000) Stream added, broadcasting: 5\nI0602 11:39:58.071882 1625 log.go:172] (0xc0006ce420) Reply frame received for 5\nI0602 11:39:58.129369 1625 log.go:172] (0xc0006ce420) Data frame received for 5\nI0602 11:39:58.129524 1625 log.go:172] (0xc0006c6000) (5) Data frame handling\nI0602 11:39:58.129560 1625 log.go:172] (0xc0006ce420) Data frame received for 3\nI0602 11:39:58.129576 1625 log.go:172] (0xc00063e000) (3) Data frame handling\nI0602 11:39:58.129592 1625 log.go:172] (0xc00063e000) (3) Data frame sent\nI0602 11:39:58.129607 1625 log.go:172] (0xc0006ce420) Data frame received for 3\nI0602 11:39:58.129618 1625 log.go:172] (0xc00063e000) (3) Data frame handling\nI0602 11:39:58.131211 1625 log.go:172] (0xc0006ce420) Data frame received for 1\nI0602 11:39:58.131248 1625 log.go:172] (0xc0005ef400) (1) Data frame handling\nI0602 11:39:58.131269 1625 log.go:172] (0xc0005ef400) (1) Data frame sent\nI0602 11:39:58.131293 1625 log.go:172] (0xc0006ce420) (0xc0005ef400) Stream removed, broadcasting: 1\nI0602 11:39:58.131323 1625 log.go:172] (0xc0006ce420) Go away received\nI0602 11:39:58.131548 1625 log.go:172] (0xc0006ce420) (0xc0005ef400) Stream removed, broadcasting: 1\nI0602 11:39:58.131584 1625 log.go:172] (0xc0006ce420) (0xc00063e000) Stream removed, broadcasting: 3\nI0602 11:39:58.131593 1625 log.go:172] (0xc0006ce420) (0xc0006c6000) Stream removed, broadcasting: 5\n" Jun 2 11:39:58.136: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 2 11:39:58.136: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 2 11:39:58.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:39:58.377: INFO: stderr: "I0602 11:39:58.279698 1647 log.go:172] (0xc00084a2c0) (0xc0006792c0) Create stream\nI0602 11:39:58.279747 1647 log.go:172] (0xc00084a2c0) (0xc0006792c0) Stream added, broadcasting: 1\nI0602 11:39:58.281704 1647 log.go:172] (0xc00084a2c0) Reply frame received for 1\nI0602 11:39:58.281744 1647 log.go:172] (0xc00084a2c0) (0xc000679360) Create stream\nI0602 11:39:58.281753 1647 log.go:172] (0xc00084a2c0) (0xc000679360) Stream added, broadcasting: 3\nI0602 11:39:58.282634 1647 log.go:172] (0xc00084a2c0) Reply frame received for 3\nI0602 11:39:58.282682 1647 log.go:172] (0xc00084a2c0) (0xc000679400) Create stream\nI0602 11:39:58.282698 1647 log.go:172] (0xc00084a2c0) (0xc000679400) Stream added, broadcasting: 5\nI0602 11:39:58.283452 1647 log.go:172] (0xc00084a2c0) Reply frame received for 5\nI0602 11:39:58.370414 1647 log.go:172] (0xc00084a2c0) Data frame received for 3\nI0602 11:39:58.370447 
1647 log.go:172] (0xc000679360) (3) Data frame handling\nI0602 11:39:58.370470 1647 log.go:172] (0xc00084a2c0) Data frame received for 5\nI0602 11:39:58.370500 1647 log.go:172] (0xc000679400) (5) Data frame handling\nI0602 11:39:58.370518 1647 log.go:172] (0xc000679400) (5) Data frame sent\nI0602 11:39:58.370535 1647 log.go:172] (0xc00084a2c0) Data frame received for 5\nI0602 11:39:58.370552 1647 log.go:172] (0xc000679400) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0602 11:39:58.370665 1647 log.go:172] (0xc000679360) (3) Data frame sent\nI0602 11:39:58.370914 1647 log.go:172] (0xc00084a2c0) Data frame received for 3\nI0602 11:39:58.370921 1647 log.go:172] (0xc000679360) (3) Data frame handling\nI0602 11:39:58.372140 1647 log.go:172] (0xc00084a2c0) Data frame received for 1\nI0602 11:39:58.372164 1647 log.go:172] (0xc0006792c0) (1) Data frame handling\nI0602 11:39:58.372184 1647 log.go:172] (0xc0006792c0) (1) Data frame sent\nI0602 11:39:58.372210 1647 log.go:172] (0xc00084a2c0) (0xc0006792c0) Stream removed, broadcasting: 1\nI0602 11:39:58.372239 1647 log.go:172] (0xc00084a2c0) Go away received\nI0602 11:39:58.372430 1647 log.go:172] (0xc00084a2c0) (0xc0006792c0) Stream removed, broadcasting: 1\nI0602 11:39:58.372457 1647 log.go:172] (0xc00084a2c0) (0xc000679360) Stream removed, broadcasting: 3\nI0602 11:39:58.372473 1647 log.go:172] (0xc00084a2c0) (0xc000679400) Stream removed, broadcasting: 5\n" Jun 2 11:39:58.377: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 2 11:39:58.377: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 2 11:39:58.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:39:58.581: INFO: stderr: "I0602 11:39:58.495848 1669 log.go:172] (0xc0007cc2c0) (0xc00072c640) Create stream\nI0602 11:39:58.495914 1669 log.go:172] (0xc0007cc2c0) (0xc00072c640) Stream added, broadcasting: 1\nI0602 11:39:58.497911 1669 log.go:172] (0xc0007cc2c0) Reply frame received for 1\nI0602 11:39:58.497944 1669 log.go:172] (0xc0007cc2c0) (0xc000626e60) Create stream\nI0602 11:39:58.497952 1669 log.go:172] (0xc0007cc2c0) (0xc000626e60) Stream added, broadcasting: 3\nI0602 11:39:58.498694 1669 log.go:172] (0xc0007cc2c0) Reply frame received for 3\nI0602 11:39:58.498724 1669 log.go:172] (0xc0007cc2c0) (0xc00002a000) Create stream\nI0602 11:39:58.498733 1669 log.go:172] (0xc0007cc2c0) (0xc00002a000) Stream added, broadcasting: 5\nI0602 11:39:58.499396 1669 log.go:172] (0xc0007cc2c0) Reply frame received for 5\nI0602 11:39:58.574537 1669 log.go:172] (0xc0007cc2c0) Data frame received for 5\nI0602 11:39:58.574597 1669 log.go:172] (0xc00002a000) (5) Data frame handling\nI0602 11:39:58.574624 1669 log.go:172] (0xc00002a000) (5) Data frame sent\nI0602 11:39:58.574650 1669 log.go:172] (0xc0007cc2c0) Data frame received for 5\nmv: can't rename '/tmp/index.html': No such file or directory\nI0602 11:39:58.574668 1669 log.go:172] (0xc00002a000) (5) Data frame handling\nI0602 11:39:58.574750 1669 log.go:172] (0xc0007cc2c0) Data frame received for 3\nI0602 11:39:58.574787 1669 log.go:172] (0xc000626e60) (3) Data frame handling\nI0602 11:39:58.574798 1669 log.go:172] (0xc000626e60) (3) Data frame sent\nI0602 11:39:58.574806 1669 log.go:172] (0xc0007cc2c0) Data frame received for 3\nI0602 11:39:58.574815 
1669 log.go:172] (0xc000626e60) (3) Data frame handling\nI0602 11:39:58.576726 1669 log.go:172] (0xc0007cc2c0) Data frame received for 1\nI0602 11:39:58.576741 1669 log.go:172] (0xc00072c640) (1) Data frame handling\nI0602 11:39:58.576747 1669 log.go:172] (0xc00072c640) (1) Data frame sent\nI0602 11:39:58.576755 1669 log.go:172] (0xc0007cc2c0) (0xc00072c640) Stream removed, broadcasting: 1\nI0602 11:39:58.576865 1669 log.go:172] (0xc0007cc2c0) Go away received\nI0602 11:39:58.576909 1669 log.go:172] (0xc0007cc2c0) (0xc00072c640) Stream removed, broadcasting: 1\nI0602 11:39:58.577073 1669 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0xc000626e60), 0x5:(*spdystream.Stream)(0xc00002a000)}\nI0602 11:39:58.577325 1669 log.go:172] (0xc0007cc2c0) (0xc000626e60) Stream removed, broadcasting: 3\nI0602 11:39:58.577368 1669 log.go:172] (0xc0007cc2c0) (0xc00002a000) Stream removed, broadcasting: 5\n" Jun 2 11:39:58.582: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 2 11:39:58.582: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 2 11:39:58.586: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 2 11:39:58.586: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 2 11:39:58.586: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jun 2 11:39:58.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 2 11:39:58.788: INFO: stderr: "I0602 11:39:58.721951 1692 log.go:172] (0xc000796160) (0xc0006fc640) Create stream\nI0602 11:39:58.722016 1692 log.go:172] (0xc000796160) (0xc0006fc640) Stream added, broadcasting: 1\nI0602 11:39:58.724467 1692 log.go:172] (0xc000796160) Reply frame received for 1\nI0602 11:39:58.724507 1692 log.go:172] (0xc000796160) (0xc0007c0b40) Create stream\nI0602 11:39:58.724516 1692 log.go:172] (0xc000796160) (0xc0007c0b40) Stream added, broadcasting: 3\nI0602 11:39:58.725754 1692 log.go:172] (0xc000796160) Reply frame received for 3\nI0602 11:39:58.725802 1692 log.go:172] (0xc000796160) (0xc0006fc6e0) Create stream\nI0602 11:39:58.725819 1692 log.go:172] (0xc000796160) (0xc0006fc6e0) Stream added, broadcasting: 5\nI0602 11:39:58.726705 1692 log.go:172] (0xc000796160) Reply frame received for 5\nI0602 11:39:58.780574 1692 log.go:172] (0xc000796160) Data frame received for 5\nI0602 11:39:58.780629 1692 log.go:172] (0xc0006fc6e0) (5) Data frame handling\nI0602 11:39:58.780672 1692 log.go:172] (0xc000796160) Data frame received for 3\nI0602 11:39:58.780697 1692 log.go:172] (0xc0007c0b40) (3) Data frame handling\nI0602 11:39:58.780751 1692 log.go:172] (0xc0007c0b40) (3) Data frame sent\nI0602 11:39:58.780765 1692 log.go:172] (0xc000796160) Data frame received for 3\nI0602 11:39:58.780772 1692 log.go:172] (0xc0007c0b40) (3) Data frame handling\nI0602 11:39:58.782328 1692 log.go:172] (0xc000796160) Data frame received for 1\nI0602 11:39:58.782354 1692 log.go:172] (0xc0006fc640) (1) Data frame handling\nI0602 11:39:58.782371 1692 log.go:172] (0xc0006fc640) (1) Data frame sent\nI0602 11:39:58.782409 1692 log.go:172] (0xc000796160) (0xc0006fc640) Stream removed, broadcasting: 1\nI0602 11:39:58.782509 1692 
log.go:172] (0xc000796160) Go away received\nI0602 11:39:58.782614 1692 log.go:172] (0xc000796160) (0xc0006fc640) Stream removed, broadcasting: 1\nI0602 11:39:58.782630 1692 log.go:172] (0xc000796160) (0xc0007c0b40) Stream removed, broadcasting: 3\nI0602 11:39:58.782640 1692 log.go:172] (0xc000796160) (0xc0006fc6e0) Stream removed, broadcasting: 5\n" Jun 2 11:39:58.788: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 2 11:39:58.788: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 2 11:39:58.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 2 11:39:59.019: INFO: stderr: "I0602 11:39:58.914369 1714 log.go:172] (0xc000138580) (0xc0001a4d20) Create stream\nI0602 11:39:58.914422 1714 log.go:172] (0xc000138580) (0xc0001a4d20) Stream added, broadcasting: 1\nI0602 11:39:58.916803 1714 log.go:172] (0xc000138580) Reply frame received for 1\nI0602 11:39:58.916838 1714 log.go:172] (0xc000138580) (0xc0007fa000) Create stream\nI0602 11:39:58.916856 1714 log.go:172] (0xc000138580) (0xc0007fa000) Stream added, broadcasting: 3\nI0602 11:39:58.917996 1714 log.go:172] (0xc000138580) Reply frame received for 3\nI0602 11:39:58.918035 1714 log.go:172] (0xc000138580) (0xc0007fa140) Create stream\nI0602 11:39:58.918043 1714 log.go:172] (0xc000138580) (0xc0007fa140) Stream added, broadcasting: 5\nI0602 11:39:58.919004 1714 log.go:172] (0xc000138580) Reply frame received for 5\nI0602 11:39:59.011858 1714 log.go:172] (0xc000138580) Data frame received for 3\nI0602 11:39:59.011910 1714 log.go:172] (0xc0007fa000) (3) Data frame handling\nI0602 11:39:59.011950 1714 log.go:172] (0xc0007fa000) (3) Data frame sent\nI0602 11:39:59.011965 1714 log.go:172] (0xc000138580) Data frame received for 3\nI0602 11:39:59.011972 1714 log.go:172] (0xc0007fa000) (3) Data frame handling\nI0602 11:39:59.012060 1714 log.go:172] (0xc000138580) Data frame received for 5\nI0602 11:39:59.012082 1714 log.go:172] (0xc0007fa140) (5) Data frame handling\nI0602 11:39:59.014192 1714 log.go:172] (0xc000138580) Data frame received for 1\nI0602 11:39:59.014246 1714 log.go:172] (0xc0001a4d20) (1) Data frame handling\nI0602 11:39:59.014282 1714 log.go:172] (0xc0001a4d20) (1) Data frame sent\nI0602 11:39:59.014310 1714 log.go:172] (0xc000138580) (0xc0001a4d20) Stream removed, broadcasting: 1\nI0602 11:39:59.014348 1714 log.go:172] (0xc000138580) Go away received\nI0602 11:39:59.014605 1714 log.go:172] (0xc000138580) (0xc0001a4d20) Stream removed, broadcasting: 1\nI0602 11:39:59.014635 1714 log.go:172] (0xc000138580) (0xc0007fa000) Stream removed, broadcasting: 3\nI0602 11:39:59.014652 1714 log.go:172] (0xc000138580) (0xc0007fa140) Stream removed, broadcasting: 5\n" Jun 2 11:39:59.019: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 2 11:39:59.019: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 2 11:39:59.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 2 11:39:59.257: INFO: stderr: "I0602 11:39:59.163612 1737 log.go:172] (0xc000138790) (0xc0007cf4a0) Create stream\nI0602 11:39:59.163669 1737 log.go:172] (0xc000138790) 
(0xc0007cf4a0) Stream added, broadcasting: 1\nI0602 11:39:59.165766 1737 log.go:172] (0xc000138790) Reply frame received for 1\nI0602 11:39:59.165813 1737 log.go:172] (0xc000138790) (0xc00050a000) Create stream\nI0602 11:39:59.165834 1737 log.go:172] (0xc000138790) (0xc00050a000) Stream added, broadcasting: 3\nI0602 11:39:59.166613 1737 log.go:172] (0xc000138790) Reply frame received for 3\nI0602 11:39:59.166649 1737 log.go:172] (0xc000138790) (0xc0005f8000) Create stream\nI0602 11:39:59.166666 1737 log.go:172] (0xc000138790) (0xc0005f8000) Stream added, broadcasting: 5\nI0602 11:39:59.167551 1737 log.go:172] (0xc000138790) Reply frame received for 5\nI0602 11:39:59.250520 1737 log.go:172] (0xc000138790) Data frame received for 3\nI0602 11:39:59.250573 1737 log.go:172] (0xc00050a000) (3) Data frame handling\nI0602 11:39:59.250605 1737 log.go:172] (0xc00050a000) (3) Data frame sent\nI0602 11:39:59.251447 1737 log.go:172] (0xc000138790) Data frame received for 5\nI0602 11:39:59.251493 1737 log.go:172] (0xc0005f8000) (5) Data frame handling\nI0602 11:39:59.251518 1737 log.go:172] (0xc000138790) Data frame received for 3\nI0602 11:39:59.251541 1737 log.go:172] (0xc00050a000) (3) Data frame handling\nI0602 11:39:59.253663 1737 log.go:172] (0xc000138790) Data frame received for 1\nI0602 11:39:59.253680 1737 log.go:172] (0xc0007cf4a0) (1) Data frame handling\nI0602 11:39:59.253689 1737 log.go:172] (0xc0007cf4a0) (1) Data frame sent\nI0602 11:39:59.253698 1737 log.go:172] (0xc000138790) (0xc0007cf4a0) Stream removed, broadcasting: 1\nI0602 11:39:59.253709 1737 log.go:172] (0xc000138790) Go away received\nI0602 11:39:59.253876 1737 log.go:172] (0xc000138790) (0xc0007cf4a0) Stream removed, broadcasting: 1\nI0602 11:39:59.253892 1737 log.go:172] (0xc000138790) (0xc00050a000) Stream removed, broadcasting: 3\nI0602 11:39:59.253907 1737 log.go:172] (0xc000138790) (0xc0005f8000) Stream removed, broadcasting: 5\n" Jun 2 11:39:59.257: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 2 11:39:59.257: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 2 11:39:59.257: INFO: Waiting for statefulset status.replicas updated to 0 Jun 2 11:39:59.271: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jun 2 11:40:09.279: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 2 11:40:09.279: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 2 11:40:09.279: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 2 11:40:09.294: INFO: POD NODE PHASE GRACE CONDITIONS Jun 2 11:40:09.294: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC }] Jun 2 11:40:09.294: INFO: ss-1 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 
+0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:09.294: INFO: ss-2 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:09.294: INFO: Jun 2 11:40:09.294: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 2 11:40:10.299: INFO: POD NODE PHASE GRACE CONDITIONS Jun 2 11:40:10.299: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC }] Jun 2 11:40:10.299: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:10.299: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:10.299: INFO: Jun 2 11:40:10.299: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 2 11:40:11.304: INFO: POD NODE PHASE GRACE CONDITIONS Jun 2 11:40:11.304: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC }] Jun 2 11:40:11.304: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:11.304: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:11.304: INFO: Jun 2 11:40:11.304: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 2 11:40:12.309: INFO: POD NODE PHASE GRACE CONDITIONS Jun 2 11:40:12.309: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC }] Jun 2 11:40:12.309: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:12.309: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:12.309: INFO: Jun 2 11:40:12.309: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 2 11:40:13.314: INFO: POD NODE PHASE GRACE CONDITIONS Jun 2 11:40:13.314: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC }] Jun 2 11:40:13.314: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:13.314: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:13.314: INFO: Jun 2 11:40:13.314: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 2 11:40:14.319: INFO: POD NODE PHASE GRACE CONDITIONS Jun 2 11:40:14.319: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC }] Jun 2 11:40:14.319: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:14.319: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:14.319: INFO: Jun 2 11:40:14.319: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 2 11:40:15.324: INFO: POD NODE PHASE GRACE CONDITIONS Jun 2 11:40:15.324: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC }] Jun 2 11:40:15.324: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:15.324: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:15.324: INFO: Jun 2 11:40:15.324: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 2 11:40:16.329: INFO: 
POD NODE PHASE GRACE CONDITIONS Jun 2 11:40:16.329: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC }] Jun 2 11:40:16.330: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:16.330: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:16.330: INFO: Jun 2 11:40:16.330: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 2 11:40:17.336: INFO: POD NODE PHASE GRACE CONDITIONS Jun 2 11:40:17.336: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC }] Jun 2 11:40:17.336: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:17.336: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:17.336: INFO: Jun 2 11:40:17.336: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 2 11:40:18.341: INFO: POD NODE PHASE GRACE CONDITIONS Jun 2 11:40:18.341: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:27 +0000 UTC }] Jun 2 11:40:18.341: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:18.341: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:40:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:39:47 +0000 UTC }] Jun 2 11:40:18.341: INFO: Jun 2 11:40:18.341: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-88kvg Jun 2 11:40:19.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:40:19.482: INFO: rc: 1 Jun 2 11:40:19.482: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001e08300 exit status 1 true [0xc0024369e0 0xc0024369f8 0xc002436a10] [0xc0024369e0 0xc0024369f8 0xc002436a10] [0xc0024369f0 0xc002436a08] [0x935700 0x935700] 0xc0014ec5a0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jun 2 11:40:29.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:40:29.568: INFO: rc: 1 Jun 2 11:40:29.568: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a90cc0 exit status 1 true [0xc00106c740 0xc00106c758 0xc00106c770] [0xc00106c740 0xc00106c758 0xc00106c770] [0xc00106c750 0xc00106c768] [0x935700 0x935700] 0xc0016c0660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:40:39.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:40:39.672: INFO: rc: 1 Jun 2 11:40:39.672: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001e08420 exit status 1 true [0xc002436a18 0xc002436a30 0xc002436a48] [0xc002436a18 0xc002436a30 0xc002436a48] [0xc002436a28 0xc002436a40] [0x935700 0x935700] 0xc0014ecae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:40:49.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:40:49.768: INFO: rc: 1 Jun 2 11:40:49.768: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001e08600 exit status 1 true [0xc002436a50 0xc002436a68 0xc002436a80] [0xc002436a50 0xc002436a68 0xc002436a80] [0xc002436a60 0xc002436a78] [0x935700 0x935700] 0xc0014ecea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:40:59.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:40:59.859: INFO: rc: 1 Jun 2 11:40:59.859: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001e08840 exit status 1 true [0xc002436a88 0xc002436aa0 0xc002436ab8] [0xc002436a88 0xc002436aa0 0xc002436ab8] [0xc002436a98 0xc002436ab0] [0x935700 0x935700] 0xc00199a0c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:41:09.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:41:09.947: INFO: rc: 1 Jun 2 11:41:09.947: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000010ff0 exit status 1 true [0xc001d06068 0xc001d06080 0xc001d06098] [0xc001d06068 0xc001d06080 0xc001d06098] [0xc001d06078 0xc001d06090] [0x935700 0x935700] 0xc001f4b380 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:41:19.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:41:20.031: INFO: rc: 1 Jun 2 11:41:20.031: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found 
[] 0xc001392120 exit status 1 true [0xc00016e000 0xc00016ec60 0xc00016ed58] [0xc00016e000 0xc00016ec60 0xc00016ed58] [0xc00016ec00 0xc00016ed48] [0x935700 0x935700] 0xc0028b41e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:41:30.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:41:30.138: INFO: rc: 1 Jun 2 11:41:30.138: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00138c1e0 exit status 1 true [0xc00000e028 0xc0006a6020 0xc0006a6388] [0xc00000e028 0xc0006a6020 0xc0006a6388] [0xc0006a6008 0xc0006a6300] [0x935700 0x935700] 0xc0014ec3c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:41:40.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:41:40.238: INFO: rc: 1 Jun 2 11:41:40.238: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00138c330 exit status 1 true [0xc0006a63a0 0xc0006a6468 0xc0006a65d0] [0xc0006a63a0 0xc0006a6468 0xc0006a65d0] [0xc0006a63c0 0xc0006a6548] [0x935700 0x935700] 0xc0014ec840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:41:50.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:41:50.334: INFO: rc: 1 Jun 2 11:41:50.334: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001392300 exit status 1 true [0xc00016eda0 0xc00016efb8 0xc00016f018] [0xc00016eda0 0xc00016efb8 0xc00016f018] [0xc00016ef18 0xc00016efd8] [0x935700 0x935700] 0xc0028b4480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:42:00.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:42:00.425: INFO: rc: 1 Jun 2 11:42:00.426: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001392480 exit status 1 true [0xc00016f048 0xc00016f0c0 0xc00016f108] [0xc00016f048 0xc00016f0c0 0xc00016f108] [0xc00016f078 0xc00016f0f0] [0x935700 0x935700] 0xc0028b4720 }: Command stdout: stderr: 
Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:42:10.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:42:10.512: INFO: rc: 1 Jun 2 11:42:10.512: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013925a0 exit status 1 true [0xc00016f120 0xc00016f170 0xc00016f198] [0xc00016f120 0xc00016f170 0xc00016f198] [0xc00016f168 0xc00016f188] [0x935700 0x935700] 0xc0028b4a20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:42:20.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:42:20.608: INFO: rc: 1 Jun 2 11:42:20.608: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000e5a150 exit status 1 true [0xc001cf2000 0xc001cf2018 0xc001cf2030] [0xc001cf2000 0xc001cf2018 0xc001cf2030] [0xc001cf2010 0xc001cf2028] [0x935700 0x935700] 0xc0018b0240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:42:30.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:42:30.708: INFO: rc: 1 Jun 2 11:42:30.709: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00138c5a0 exit status 1 true [0xc0006a65e0 0xc0006a66b8 0xc0006a67c0] [0xc0006a65e0 0xc0006a66b8 0xc0006a67c0] [0xc0006a6698 0xc0006a67a8] [0x935700 0x935700] 0xc0014eccc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:42:40.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:42:40.803: INFO: rc: 1 Jun 2 11:42:40.803: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000e5a2a0 exit status 1 true [0xc001cf2038 0xc001cf2050 0xc001cf2068] [0xc001cf2038 0xc001cf2050 0xc001cf2068] [0xc001cf2048 0xc001cf2060] [0x935700 0x935700] 0xc0018b07e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:42:50.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:42:50.899: INFO: rc: 1 Jun 2 11:42:50.899: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000e5a3f0 exit status 1 true [0xc001cf2070 0xc001cf2088 0xc001cf20a0] [0xc001cf2070 0xc001cf2088 0xc001cf20a0] [0xc001cf2080 0xc001cf2098] [0x935700 0x935700] 0xc0018b0ba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:43:00.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:43:00.991: INFO: rc: 1 Jun 2 11:43:00.991: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00138c6f0 exit status 1 true [0xc0006a6890 0xc0006a6a78 0xc0006a6c40] [0xc0006a6890 0xc0006a6a78 0xc0006a6c40] [0xc0006a6a30 0xc0006a6c38] [0x935700 0x935700] 0xc0014ed020 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:43:10.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:43:11.087: INFO: rc: 1 Jun 2 11:43:11.087: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000e5a420 exit status 1 true [0xc0006a6cb0 0xc0006a6e38 0xc0006a6fa8] [0xc0006a6cb0 0xc0006a6e38 0xc0006a6fa8] [0xc0006a6dc8 0xc0006a6f48] [0x935700 0x935700] 0xc0018b0c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:43:21.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:43:21.184: INFO: rc: 1 Jun 2 11:43:21.184: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00138c210 exit status 1 true [0xc00000e028 0xc0006a6020 0xc0006a6388] [0xc00000e028 0xc0006a6020 0xc0006a6388] [0xc0006a6008 0xc0006a6300] [0x935700 0x935700] 0xc0014ec3c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:43:31.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:43:31.270: INFO: rc: 1 Jun 2 11:43:31.270: INFO: Waiting 10s to retry failed 
RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001392150 exit status 1 true [0xc00016e000 0xc00016ec60 0xc00016ed58] [0xc00016e000 0xc00016ec60 0xc00016ed58] [0xc00016ec00 0xc00016ed48] [0x935700 0x935700] 0xc0019e45a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:43:41.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:43:41.356: INFO: rc: 1 Jun 2 11:43:41.356: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00138c3f0 exit status 1 true [0xc0006a63a0 0xc0006a6468 0xc0006a65d0] [0xc0006a63a0 0xc0006a6468 0xc0006a65d0] [0xc0006a63c0 0xc0006a6548] [0x935700 0x935700] 0xc0014ec840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:43:51.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:43:51.442: INFO: rc: 1 Jun 2 11:43:51.442: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00138c600 exit status 1 true [0xc0006a65e0 0xc0006a66b8 0xc0006a67c0] [0xc0006a65e0 0xc0006a66b8 0xc0006a67c0] [0xc0006a6698 0xc0006a67a8] [0x935700 0x935700] 0xc0014eccc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:44:01.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:44:01.536: INFO: rc: 1 Jun 2 11:44:01.536: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000e5a0f0 exit status 1 true [0xc001cf2000 0xc001cf2018 0xc001cf2030] [0xc001cf2000 0xc001cf2018 0xc001cf2030] [0xc001cf2010 0xc001cf2028] [0x935700 0x935700] 0xc0028b41e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:44:11.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:44:11.636: INFO: rc: 1 Jun 2 11:44:11.636: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000e5a2d0 exit status 1 true [0xc001cf2038 0xc001cf2050 0xc001cf2068] [0xc001cf2038 0xc001cf2050 0xc001cf2068] [0xc001cf2048 0xc001cf2060] [0x935700 0x935700] 0xc0028b4480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:44:21.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:44:21.737: INFO: rc: 1 Jun 2 11:44:21.737: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000e5a480 exit status 1 true [0xc001cf2070 0xc001cf2088 0xc001cf20a0] [0xc001cf2070 0xc001cf2088 0xc001cf20a0] [0xc001cf2080 0xc001cf2098] [0x935700 0x935700] 0xc0028b4720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:44:31.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:44:31.837: INFO: rc: 1 Jun 2 11:44:31.837: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000e5a5d0 exit status 1 true [0xc001cf20a8 0xc001cf20c0 0xc001cf20d8] [0xc001cf20a8 0xc001cf20c0 0xc001cf20d8] [0xc001cf20b8 0xc001cf20d0] [0x935700 0x935700] 0xc0028b4a20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:44:41.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:44:41.926: INFO: rc: 1 Jun 2 11:44:41.926: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001392390 exit status 1 true [0xc00016eda0 0xc00016efb8 0xc00016f018] [0xc00016eda0 0xc00016efb8 0xc00016f018] [0xc00016ef18 0xc00016efd8] [0x935700 0x935700] 0xc0019e48a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:44:51.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:44:52.016: INFO: rc: 1 Jun 2 11:44:52.016: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001392510 exit status 1 true [0xc00016f048 0xc00016f0c0 0xc00016f108] [0xc00016f048 0xc00016f0c0 0xc00016f108] [0xc00016f078 0xc00016f0f0] 
[0x935700 0x935700] 0xc0019e4b40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:45:02.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:45:02.104: INFO: rc: 1 Jun 2 11:45:02.104: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000e5a750 exit status 1 true [0xc001cf20e0 0xc001cf20f8 0xc001cf2110] [0xc001cf20e0 0xc001cf20f8 0xc001cf2110] [0xc001cf20f0 0xc001cf2108] [0x935700 0x935700] 0xc0028b5200 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:45:12.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:45:12.204: INFO: rc: 1 Jun 2 11:45:12.204: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001392120 exit status 1 true [0xc00000e208 0xc00016ec00 0xc00016ed48] [0xc00000e208 0xc00016ec00 0xc00016ed48] [0xc00016ebf0 0xc00016ecf0] [0x935700 0x935700] 0xc0019e43c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 2 11:45:22.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-88kvg ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:45:22.294: INFO: rc: 1 Jun 2 11:45:22.294: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Jun 2 11:45:22.294: INFO: Scaling statefulset ss to 0 Jun 2 11:45:22.302: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 2 11:45:22.305: INFO: Deleting all statefulset in ns e2e-tests-statefulset-88kvg Jun 2 11:45:22.307: INFO: Scaling statefulset ss to 0 Jun 2 11:45:22.315: INFO: Waiting for statefulset status.replicas updated to 0 Jun 2 11:45:22.317: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:45:22.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-88kvg" for this suite. 
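The wall of retries above is the framework's RunHostCmd helper repeating a kubectl exec every 10 seconds while ss-0 terminates; the '|| true' only protects against the mv failing inside the container, so once the exec connection itself fails ("container not found", then pods "ss-0" not found) each attempt still exits 1, until the helper gives up and the test proceeds to scale the set to 0. A minimal Go sketch of that retry shape, using only the standard library (namespace, timeouts and error wording below are illustrative, not the framework's own code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runHostCmdWithRetry sketches the retry pattern visible in the log above:
// run a command in a pod via kubectl exec and, if it fails (for example
// because the pod is already terminating or gone), wait and try again until
// the deadline passes. This is illustrative, not the e2e framework's helper.
func runHostCmdWithRetry(namespace, pod, cmd string, interval, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("kubectl",
			"exec", "--namespace="+namespace, pod, "--", "/bin/sh", "-c", cmd,
		).CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		if time.Now().After(deadline) {
			return string(out), fmt.Errorf("command %q on %s/%s still failing at deadline: %v", cmd, namespace, pod, err)
		}
		time.Sleep(interval) // the log above shows a 10s pause between attempts
	}
}

func main() {
	out, err := runHostCmdWithRetry("default", "ss-0",
		"mv -v /tmp/index.html /usr/share/nginx/html/ || true",
		10*time.Second, 5*time.Minute)
	fmt.Println(out, err)
}
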
Jun 2 11:45:28.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:45:28.430: INFO: namespace: e2e-tests-statefulset-88kvg, resource: bindings, ignored listing per whitelist Jun 2 11:45:28.510: INFO: namespace e2e-tests-statefulset-88kvg deletion completed in 6.172283942s • [SLOW TEST:361.144 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:45:28.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Jun 2 11:45:32.696: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-8e48e6cb-a4c6-11ea-889d-0242ac110018", GenerateName:"", Namespace:"e2e-tests-pods-bfmk4", SelfLink:"/api/v1/namespaces/e2e-tests-pods-bfmk4/pods/pod-submit-remove-8e48e6cb-a4c6-11ea-889d-0242ac110018", UID:"8e55f512-a4c6-11ea-99e8-0242ac110002", ResourceVersion:"13827502", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726695128, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"584376906"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6mhtx", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0020eabc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6mhtx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0022d0628), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002545e00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022d06e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022d0700)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0022d0708), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0022d070c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726695128, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726695132, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726695132, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726695128, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.225", StartTime:(*v1.Time)(0xc001aa9c20), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001aa9c40), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://19e271e8df17ef65122330cb7522a103f8a258553e0119c28aef32d3e6f2d379"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jun 2 11:45:37.711: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:45:37.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-bfmk4" for this suite. Jun 2 11:45:43.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:45:43.761: INFO: namespace: e2e-tests-pods-bfmk4, resource: bindings, ignored listing per whitelist Jun 2 11:45:43.852: INFO: namespace e2e-tests-pods-bfmk4 deletion completed in 6.132784585s • [SLOW TEST:15.342 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:45:43.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 2 11:45:43.986: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97762f58-a4c6-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-kfxwb" to be "success or failure" Jun 2 11:45:44.004: INFO: Pod "downwardapi-volume-97762f58-a4c6-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.205115ms Jun 2 11:45:46.008: INFO: Pod "downwardapi-volume-97762f58-a4c6-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022210326s Jun 2 11:45:48.011: INFO: Pod "downwardapi-volume-97762f58-a4c6-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025052121s STEP: Saw pod success Jun 2 11:45:48.011: INFO: Pod "downwardapi-volume-97762f58-a4c6-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:45:48.013: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-97762f58-a4c6-11ea-889d-0242ac110018 container client-container: STEP: delete the pod Jun 2 11:45:48.055: INFO: Waiting for pod downwardapi-volume-97762f58-a4c6-11ea-889d-0242ac110018 to disappear Jun 2 11:45:48.063: INFO: Pod downwardapi-volume-97762f58-a4c6-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:45:48.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kfxwb" for this suite. Jun 2 11:45:54.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:45:54.124: INFO: namespace: e2e-tests-projected-kfxwb, resource: bindings, ignored listing per whitelist Jun 2 11:45:54.162: INFO: namespace e2e-tests-projected-kfxwb deletion completed in 6.096898503s • [SLOW TEST:10.310 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:45:54.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-9d952794-a4c6-11ea-889d-0242ac110018 STEP: Creating a pod to test consume secrets Jun 2 11:45:54.272: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9d97698c-a4c6-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-mvxdp" to be "success or failure" Jun 2 11:45:54.318: INFO: Pod "pod-projected-secrets-9d97698c-a4c6-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 45.756797ms Jun 2 11:45:56.322: INFO: Pod "pod-projected-secrets-9d97698c-a4c6-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049986049s Jun 2 11:45:58.326: INFO: Pod "pod-projected-secrets-9d97698c-a4c6-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.054187769s STEP: Saw pod success Jun 2 11:45:58.327: INFO: Pod "pod-projected-secrets-9d97698c-a4c6-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:45:58.330: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-9d97698c-a4c6-11ea-889d-0242ac110018 container projected-secret-volume-test: STEP: delete the pod Jun 2 11:45:58.392: INFO: Waiting for pod pod-projected-secrets-9d97698c-a4c6-11ea-889d-0242ac110018 to disappear Jun 2 11:45:58.432: INFO: Pod pod-projected-secrets-9d97698c-a4c6-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:45:58.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mvxdp" for this suite. Jun 2 11:46:04.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:46:04.587: INFO: namespace: e2e-tests-projected-mvxdp, resource: bindings, ignored listing per whitelist Jun 2 11:46:04.675: INFO: namespace e2e-tests-projected-mvxdp deletion completed in 6.238126646s • [SLOW TEST:10.513 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:46:04.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 2 11:46:04.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-dg8lh' Jun 2 11:46:06.920: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 2 11:46:06.920: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Jun 2 11:46:06.985: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-rn7pj] Jun 2 11:46:06.985: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-rn7pj" in namespace "e2e-tests-kubectl-dg8lh" to be "running and ready" Jun 2 11:46:06.988: INFO: Pod "e2e-test-nginx-rc-rn7pj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.304243ms Jun 2 11:46:08.992: INFO: Pod "e2e-test-nginx-rc-rn7pj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006699732s Jun 2 11:46:10.996: INFO: Pod "e2e-test-nginx-rc-rn7pj": Phase="Running", Reason="", readiness=true. Elapsed: 4.010807235s Jun 2 11:46:10.996: INFO: Pod "e2e-test-nginx-rc-rn7pj" satisfied condition "running and ready" Jun 2 11:46:10.996: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-rn7pj] Jun 2 11:46:10.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-dg8lh' Jun 2 11:46:11.127: INFO: stderr: "" Jun 2 11:46:11.127: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Jun 2 11:46:11.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-dg8lh' Jun 2 11:46:11.290: INFO: stderr: "" Jun 2 11:46:11.290: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:46:11.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dg8lh" for this suite. 
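The deprecation warning above is expected: '--generator=run/v1' makes kubectl run create a ReplicationController, which newer kubectl releases no longer do by default. A sketch of a roughly equivalent object built directly against the Kubernetes API types (the selector label is an assumption for illustration; kubectl chose its own label, which the log does not print):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nginxRC builds a ReplicationController roughly equivalent to what
// 'kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1'
// produced in the log above. Label key/value are illustrative assumptions.
func nginxRC(namespace string) *corev1.ReplicationController {
	replicas := int32(1)
	labels := map[string]string{"run": "e2e-test-nginx-rc"}
	return &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-rc", Namespace: namespace},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-rc",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}

func main() {
	rc := nginxRC("default")
	fmt.Println("would create ReplicationController", rc.Name, "with", *rc.Spec.Replicas, "replica(s)")
}

The empty stdout from 'kubectl logs rc/e2e-test-nginx-rc' later in the log is consistent with an nginx container that has not served any requests yet.
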
Jun 2 11:46:17.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:46:17.391: INFO: namespace: e2e-tests-kubectl-dg8lh, resource: bindings, ignored listing per whitelist Jun 2 11:46:17.417: INFO: namespace e2e-tests-kubectl-dg8lh deletion completed in 6.12195961s • [SLOW TEST:12.741 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:46:17.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jun 2 11:46:17.561: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 2 11:46:17.582: INFO: Waiting for terminating namespaces to be deleted... Jun 2 11:46:17.604: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Jun 2 11:46:17.610: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Jun 2 11:46:17.610: INFO: Container kube-proxy ready: true, restart count 0 Jun 2 11:46:17.610: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 2 11:46:17.610: INFO: Container kindnet-cni ready: true, restart count 0 Jun 2 11:46:17.610: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 2 11:46:17.610: INFO: Container coredns ready: true, restart count 0 Jun 2 11:46:17.610: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Jun 2 11:46:17.614: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 2 11:46:17.614: INFO: Container kindnet-cni ready: true, restart count 0 Jun 2 11:46:17.614: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 2 11:46:17.614: INFO: Container coredns ready: true, restart count 0 Jun 2 11:46:17.614: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 2 11:46:17.614: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. 
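The pod this step schedules has a nodeSelector naming a label that no node carries, which is what produces the FailedScheduling event that follows ("0/3 nodes are available: 3 node(s) didn't match node selector"). A sketch of a pod of that shape (the label key/value and image are assumptions for illustration; only the pod name comes from the event below):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// restrictedPod sketches a pod whose nodeSelector matches no node, so the
// scheduler can never place it. Label and image are illustrative only.
func restrictedPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod", Namespace: namespace},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"example.com/nonexistent": "true"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
}

func main() {
	fmt.Println("nodeSelector:", restrictedPod("default").Spec.NodeSelector)
}
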
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1614b78b613102f6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:46:18.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-xtjf5" for this suite. Jun 2 11:46:24.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:46:24.748: INFO: namespace: e2e-tests-sched-pred-xtjf5, resource: bindings, ignored listing per whitelist Jun 2 11:46:24.790: INFO: namespace e2e-tests-sched-pred-xtjf5 deletion completed in 6.153940778s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.373 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:46:24.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 2 11:46:24.888: INFO: Waiting up to 5m0s for pod "downwardapi-volume-afd4bb89-a4c6-11ea-889d-0242ac110018" in namespace "e2e-tests-downward-api-qwlj6" to be "success or failure" Jun 2 11:46:24.894: INFO: Pod "downwardapi-volume-afd4bb89-a4c6-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094304ms Jun 2 11:46:26.897: INFO: Pod "downwardapi-volume-afd4bb89-a4c6-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009197905s Jun 2 11:46:28.902: INFO: Pod "downwardapi-volume-afd4bb89-a4c6-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014074872s STEP: Saw pod success Jun 2 11:46:28.902: INFO: Pod "downwardapi-volume-afd4bb89-a4c6-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:46:28.905: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-afd4bb89-a4c6-11ea-889d-0242ac110018 container client-container: STEP: delete the pod Jun 2 11:46:28.925: INFO: Waiting for pod downwardapi-volume-afd4bb89-a4c6-11ea-889d-0242ac110018 to disappear Jun 2 11:46:28.930: INFO: Pod downwardapi-volume-afd4bb89-a4c6-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:46:28.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qwlj6" for this suite. Jun 2 11:46:34.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:46:34.995: INFO: namespace: e2e-tests-downward-api-qwlj6, resource: bindings, ignored listing per whitelist Jun 2 11:46:35.039: INFO: namespace e2e-tests-downward-api-qwlj6 deletion completed in 6.105333281s • [SLOW TEST:10.249 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:46:35.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
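A comparable DaemonSet can be sketched as below. It carries no toleration for the node-role.kubernetes.io/master taint, so the control-plane node is skipped, which is exactly what the following log lines report; the image and labels are assumptions rather than the e2e test's own.
# Sketch: a simple DaemonSet that lands only on untainted worker nodes.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx   # image is an assumption
EOF
kubectl rollout status ds/daemon-set
kubectl get pods -l app=daemon-set -o wide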
Jun 2 11:46:35.227: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:35.231: INFO: Number of nodes with available pods: 0 Jun 2 11:46:35.231: INFO: Node hunter-worker is running more than one daemon pod Jun 2 11:46:36.236: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:36.239: INFO: Number of nodes with available pods: 0 Jun 2 11:46:36.239: INFO: Node hunter-worker is running more than one daemon pod Jun 2 11:46:37.331: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:37.644: INFO: Number of nodes with available pods: 0 Jun 2 11:46:37.644: INFO: Node hunter-worker is running more than one daemon pod Jun 2 11:46:38.237: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:38.240: INFO: Number of nodes with available pods: 0 Jun 2 11:46:38.240: INFO: Node hunter-worker is running more than one daemon pod Jun 2 11:46:39.258: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:39.274: INFO: Number of nodes with available pods: 2 Jun 2 11:46:39.274: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jun 2 11:46:39.299: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:39.301: INFO: Number of nodes with available pods: 1 Jun 2 11:46:39.301: INFO: Node hunter-worker is running more than one daemon pod Jun 2 11:46:40.306: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:40.310: INFO: Number of nodes with available pods: 1 Jun 2 11:46:40.310: INFO: Node hunter-worker is running more than one daemon pod Jun 2 11:46:41.307: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:41.311: INFO: Number of nodes with available pods: 1 Jun 2 11:46:41.311: INFO: Node hunter-worker is running more than one daemon pod Jun 2 11:46:42.307: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:42.311: INFO: Number of nodes with available pods: 1 Jun 2 11:46:42.311: INFO: Node hunter-worker is running more than one daemon pod Jun 2 11:46:43.320: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:43.323: INFO: Number of nodes with available pods: 1 Jun 2 11:46:43.323: INFO: Node hunter-worker is running more than one daemon pod Jun 2 11:46:44.307: INFO: DaemonSet pods can't tolerate node hunter-control-plane with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:44.311: INFO: Number of nodes with available pods: 1 Jun 2 11:46:44.311: INFO: Node hunter-worker is running more than one daemon pod Jun 2 11:46:45.307: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:45.310: INFO: Number of nodes with available pods: 1 Jun 2 11:46:45.310: INFO: Node hunter-worker is running more than one daemon pod Jun 2 11:46:46.306: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:46.309: INFO: Number of nodes with available pods: 1 Jun 2 11:46:46.309: INFO: Node hunter-worker is running more than one daemon pod Jun 2 11:46:47.307: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:47.310: INFO: Number of nodes with available pods: 1 Jun 2 11:46:47.310: INFO: Node hunter-worker is running more than one daemon pod Jun 2 11:46:48.306: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:48.309: INFO: Number of nodes with available pods: 1 Jun 2 11:46:48.309: INFO: Node hunter-worker is running more than one daemon pod Jun 2 11:46:49.305: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:49.308: INFO: Number of nodes with available pods: 1 Jun 2 11:46:49.308: INFO: Node hunter-worker is running more than one daemon pod Jun 2 11:46:50.307: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:50.310: INFO: Number of nodes with available pods: 1 Jun 2 11:46:50.310: INFO: Node hunter-worker is running more than one daemon pod Jun 2 11:46:51.325: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:51.338: INFO: Number of nodes with available pods: 1 Jun 2 11:46:51.338: INFO: Node hunter-worker is running more than one daemon pod Jun 2 11:46:52.306: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:52.310: INFO: Number of nodes with available pods: 1 Jun 2 11:46:52.310: INFO: Node hunter-worker is running more than one daemon pod Jun 2 11:46:53.420: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:53.424: INFO: Number of nodes with available pods: 1 Jun 2 11:46:53.424: INFO: Node hunter-worker is running more than one daemon pod Jun 2 11:46:54.307: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 2 11:46:54.311: INFO: Number of nodes with 
available pods: 2 Jun 2 11:46:54.311: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-pdzzn, will wait for the garbage collector to delete the pods Jun 2 11:46:54.374: INFO: Deleting DaemonSet.extensions daemon-set took: 6.637389ms Jun 2 11:46:54.474: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.295086ms Jun 2 11:47:01.803: INFO: Number of nodes with available pods: 0 Jun 2 11:47:01.803: INFO: Number of running nodes: 0, number of available pods: 0 Jun 2 11:47:01.805: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-pdzzn/daemonsets","resourceVersion":"13827870"},"items":null} Jun 2 11:47:01.808: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-pdzzn/pods","resourceVersion":"13827870"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:47:01.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-pdzzn" for this suite. Jun 2 11:47:07.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:47:07.903: INFO: namespace: e2e-tests-daemonsets-pdzzn, resource: bindings, ignored listing per whitelist Jun 2 11:47:07.924: INFO: namespace e2e-tests-daemonsets-pdzzn deletion completed in 6.103748033s • [SLOW TEST:32.884 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:47:07.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 2 11:47:08.024: INFO: Creating ReplicaSet my-hostname-basic-c98e542d-a4c6-11ea-889d-0242ac110018 Jun 2 11:47:08.048: INFO: Pod name my-hostname-basic-c98e542d-a4c6-11ea-889d-0242ac110018: Found 0 pods out of 1 Jun 2 11:47:13.053: INFO: Pod name my-hostname-basic-c98e542d-a4c6-11ea-889d-0242ac110018: Found 1 pods out of 1 Jun 2 11:47:13.053: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c98e542d-a4c6-11ea-889d-0242ac110018" is running Jun 2 11:47:13.057: INFO: Pod "my-hostname-basic-c98e542d-a4c6-11ea-889d-0242ac110018-tphb7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-02 
11:47:08 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-02 11:47:11 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-02 11:47:11 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-02 11:47:08 +0000 UTC Reason: Message:}]) Jun 2 11:47:13.057: INFO: Trying to dial the pod Jun 2 11:47:18.070: INFO: Controller my-hostname-basic-c98e542d-a4c6-11ea-889d-0242ac110018: Got expected result from replica 1 [my-hostname-basic-c98e542d-a4c6-11ea-889d-0242ac110018-tphb7]: "my-hostname-basic-c98e542d-a4c6-11ea-889d-0242ac110018-tphb7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:47:18.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-6gj6l" for this suite. Jun 2 11:47:24.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:47:24.190: INFO: namespace: e2e-tests-replicaset-6gj6l, resource: bindings, ignored listing per whitelist Jun 2 11:47:24.205: INFO: namespace e2e-tests-replicaset-6gj6l deletion completed in 6.13147102s • [SLOW TEST:16.281 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:47:24.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 2 11:47:24.339: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.526659ms)
Jun 2 11:47:24.343: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.584972ms)
Jun 2 11:47:24.347: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.6477ms)
Jun 2 11:47:24.351: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 4.077038ms)
Jun 2 11:47:24.355: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 4.116748ms)
Jun 2 11:47:24.358: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.340431ms)
Jun 2 11:47:24.361: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.0884ms)
Jun 2 11:47:24.364: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.481849ms)
Jun 2 11:47:24.367: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.079989ms)
Jun 2 11:47:24.370: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.927919ms)
Jun 2 11:47:24.373: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.100303ms)
Jun 2 11:47:24.377: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.343849ms)
Jun 2 11:47:24.380: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.276965ms)
Jun 2 11:47:24.415: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 34.863184ms)
Jun 2 11:47:24.419: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 4.477504ms)
Jun 2 11:47:24.423: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.834836ms)
Jun 2 11:47:24.427: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.939502ms)
Jun 2 11:47:24.430: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.286638ms)
Jun 2 11:47:24.434: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.513499ms)
Jun 2 11:47:24.438: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/
(200; 3.802003ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:47:24.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-wfzz7" for this suite. Jun 2 11:47:30.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:47:30.486: INFO: namespace: e2e-tests-proxy-wfzz7, resource: bindings, ignored listing per whitelist Jun 2 11:47:30.540: INFO: namespace e2e-tests-proxy-wfzz7 deletion completed in 6.099183566s • [SLOW TEST:6.335 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:47:30.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Jun 2 11:47:30.659: INFO: Waiting up to 5m0s for pod "var-expansion-d70911a3-a4c6-11ea-889d-0242ac110018" in namespace "e2e-tests-var-expansion-lwjs9" to be "success or failure" Jun 2 11:47:30.681: INFO: Pod "var-expansion-d70911a3-a4c6-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.006579ms Jun 2 11:47:32.685: INFO: Pod "var-expansion-d70911a3-a4c6-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025913134s Jun 2 11:47:34.688: INFO: Pod "var-expansion-d70911a3-a4c6-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029120004s STEP: Saw pod success Jun 2 11:47:34.688: INFO: Pod "var-expansion-d70911a3-a4c6-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:47:34.691: INFO: Trying to get logs from node hunter-worker pod var-expansion-d70911a3-a4c6-11ea-889d-0242ac110018 container dapi-container: STEP: delete the pod Jun 2 11:47:34.800: INFO: Waiting for pod var-expansion-d70911a3-a4c6-11ea-889d-0242ac110018 to disappear Jun 2 11:47:34.826: INFO: Pod var-expansion-d70911a3-a4c6-11ea-889d-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:47:34.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-lwjs9" for this suite. 
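The Variable Expansion case above boils down to Kubernetes substituting $(VAR) references in a container's command from its environment. A minimal sketch, with the variable name, value, and image assumed (only the dapi-container name is taken from the log):
# Sketch: $(MESSAGE) in command is expanded by Kubernetes at pod creation, not by the shell.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo $(MESSAGE)"]
    env:
    - name: MESSAGE
      value: "hello from expansion"
EOF
kubectl logs var-expansion-demo   # expected output: hello from expansion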
Jun 2 11:47:40.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:47:40.902: INFO: namespace: e2e-tests-var-expansion-lwjs9, resource: bindings, ignored listing per whitelist Jun 2 11:47:40.932: INFO: namespace e2e-tests-var-expansion-lwjs9 deletion completed in 6.101337796s • [SLOW TEST:10.391 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:47:40.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-ddhbl Jun 2 11:47:45.115: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-ddhbl STEP: checking the pod's current state and verifying that restartCount is present Jun 2 11:47:45.119: INFO: Initial restart count of pod liveness-http is 0 Jun 2 11:48:03.200: INFO: Restart count of pod e2e-tests-container-probe-ddhbl/liveness-http is now 1 (18.080841464s elapsed) Jun 2 11:48:23.282: INFO: Restart count of pod e2e-tests-container-probe-ddhbl/liveness-http is now 2 (38.162788764s elapsed) Jun 2 11:48:43.338: INFO: Restart count of pod e2e-tests-container-probe-ddhbl/liveness-http is now 3 (58.218822446s elapsed) Jun 2 11:49:03.391: INFO: Restart count of pod e2e-tests-container-probe-ddhbl/liveness-http is now 4 (1m18.272056637s elapsed) Jun 2 11:50:07.521: INFO: Restart count of pod e2e-tests-container-probe-ddhbl/liveness-http is now 5 (2m22.40234968s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:50:07.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-ddhbl" for this suite. 
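The rising restart count observed above comes from a liveness probe that keeps failing, so the kubelet restarts the container each time. A minimal sketch with an httpGet probe; the image, path, and timings are assumptions, not the e2e test's own liveness image.
# Sketch: a probe against a path the server does not serve, so restartCount climbs.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: docker.io/library/nginx
    livenessProbe:
      httpGet:
        path: /healthz     # nginx serves no /healthz, so the probe fails
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'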
Jun 2 11:50:13.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:50:13.628: INFO: namespace: e2e-tests-container-probe-ddhbl, resource: bindings, ignored listing per whitelist Jun 2 11:50:13.700: INFO: namespace e2e-tests-container-probe-ddhbl deletion completed in 6.148534121s • [SLOW TEST:152.767 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:50:13.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 2 11:50:13.818: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3848d397-a4c7-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-8g8fb" to be "success or failure" Jun 2 11:50:13.820: INFO: Pod "downwardapi-volume-3848d397-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.768519ms Jun 2 11:50:15.842: INFO: Pod "downwardapi-volume-3848d397-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02451263s Jun 2 11:50:17.846: INFO: Pod "downwardapi-volume-3848d397-a4c7-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028707402s STEP: Saw pod success Jun 2 11:50:17.846: INFO: Pod "downwardapi-volume-3848d397-a4c7-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:50:17.850: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-3848d397-a4c7-11ea-889d-0242ac110018 container client-container: STEP: delete the pod Jun 2 11:50:17.876: INFO: Waiting for pod downwardapi-volume-3848d397-a4c7-11ea-889d-0242ac110018 to disappear Jun 2 11:50:17.887: INFO: Pod downwardapi-volume-3848d397-a4c7-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:50:17.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8g8fb" for this suite. 
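The projected downwardAPI case above exposes the container's memory request as a file in a projected volume. A minimal sketch, with the file path, request size, and image assumed (the client-container name mirrors the log):
# Sketch: requests.memory surfaced via a projected downwardAPI volume, printed in bytes.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF
kubectl logs downwardapi-volume-demo   # prints the request in bytes, e.g. 33554432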
Jun 2 11:50:23.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:50:23.953: INFO: namespace: e2e-tests-projected-8g8fb, resource: bindings, ignored listing per whitelist Jun 2 11:50:23.987: INFO: namespace e2e-tests-projected-8g8fb deletion completed in 6.096612672s • [SLOW TEST:10.287 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:50:23.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Jun 2 11:50:24.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-jsbd9 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jun 2 11:50:26.858: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0602 11:50:26.781656 2483 log.go:172] (0xc000138630) (0xc000702140) Create stream\nI0602 11:50:26.781699 2483 log.go:172] (0xc000138630) (0xc000702140) Stream added, broadcasting: 1\nI0602 11:50:26.783481 2483 log.go:172] (0xc000138630) Reply frame received for 1\nI0602 11:50:26.783508 2483 log.go:172] (0xc000138630) (0xc000717220) Create stream\nI0602 11:50:26.783515 2483 log.go:172] (0xc000138630) (0xc000717220) Stream added, broadcasting: 3\nI0602 11:50:26.784288 2483 log.go:172] (0xc000138630) Reply frame received for 3\nI0602 11:50:26.784337 2483 log.go:172] (0xc000138630) (0xc0005fcdc0) Create stream\nI0602 11:50:26.784353 2483 log.go:172] (0xc000138630) (0xc0005fcdc0) Stream added, broadcasting: 5\nI0602 11:50:26.785430 2483 log.go:172] (0xc000138630) Reply frame received for 5\nI0602 11:50:26.785453 2483 log.go:172] (0xc000138630) (0xc0007021e0) Create stream\nI0602 11:50:26.785459 2483 log.go:172] (0xc000138630) (0xc0007021e0) Stream added, broadcasting: 7\nI0602 11:50:26.786122 2483 log.go:172] (0xc000138630) Reply frame received for 7\nI0602 11:50:26.786217 2483 log.go:172] (0xc000717220) (3) Writing data frame\nI0602 11:50:26.786285 2483 log.go:172] (0xc000717220) (3) Writing data frame\nI0602 11:50:26.787180 2483 log.go:172] (0xc000138630) Data frame received for 5\nI0602 11:50:26.787199 2483 log.go:172] (0xc0005fcdc0) (5) Data frame handling\nI0602 11:50:26.787216 2483 log.go:172] (0xc0005fcdc0) (5) Data frame sent\nI0602 11:50:26.787682 2483 log.go:172] (0xc000138630) Data frame received for 5\nI0602 11:50:26.787699 2483 log.go:172] (0xc0005fcdc0) (5) Data frame handling\nI0602 11:50:26.787713 2483 log.go:172] (0xc0005fcdc0) (5) Data frame sent\nI0602 11:50:26.835963 2483 log.go:172] (0xc000138630) Data frame received for 7\nI0602 11:50:26.835990 2483 log.go:172] (0xc0007021e0) (7) Data frame handling\nI0602 11:50:26.836169 2483 log.go:172] (0xc000138630) Data frame received for 5\nI0602 11:50:26.836181 2483 log.go:172] (0xc0005fcdc0) (5) Data frame handling\nI0602 11:50:26.836618 2483 log.go:172] (0xc000138630) Data frame received for 1\nI0602 11:50:26.836633 2483 log.go:172] (0xc000702140) (1) Data frame handling\nI0602 11:50:26.836663 2483 log.go:172] (0xc000702140) (1) Data frame sent\nI0602 11:50:26.836793 2483 log.go:172] (0xc000138630) (0xc000702140) Stream removed, broadcasting: 1\nI0602 11:50:26.836850 2483 log.go:172] (0xc000138630) (0xc000702140) Stream removed, broadcasting: 1\nI0602 11:50:26.836861 2483 log.go:172] (0xc000138630) (0xc000717220) Stream removed, broadcasting: 3\nI0602 11:50:26.836875 2483 log.go:172] (0xc000138630) (0xc0005fcdc0) Stream removed, broadcasting: 5\nI0602 11:50:26.836982 2483 log.go:172] (0xc000138630) (0xc0007021e0) Stream removed, broadcasting: 7\nI0602 11:50:26.837650 2483 log.go:172] (0xc000138630) Go away received\n" Jun 2 11:50:26.859: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:50:28.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jsbd9" for this suite. 
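Reduced to a standalone command, the "run --rm job" flow above pipes data to the attached container and lets --rm delete the Job when it finishes. The flags below mirror the command logged above; only the piped input is an illustration of what the framework writes on stdin.
# Sketch: a one-off busybox Job, stdin attached, cleaned up automatically by --rm.
printf 'abcd1234' | kubectl run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure \
  --attach=true --stdin -- sh -c 'cat && echo "stdin closed"'
# expected output: abcd1234stdin closed, followed by the Job deletion message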
Jun 2 11:50:34.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:50:34.903: INFO: namespace: e2e-tests-kubectl-jsbd9, resource: bindings, ignored listing per whitelist Jun 2 11:50:34.956: INFO: namespace e2e-tests-kubectl-jsbd9 deletion completed in 6.085669659s • [SLOW TEST:10.969 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:50:34.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-44f4d6ae-a4c7-11ea-889d-0242ac110018 STEP: Creating a pod to test consume secrets Jun 2 11:50:35.099: INFO: Waiting up to 5m0s for pod "pod-secrets-44f99a24-a4c7-11ea-889d-0242ac110018" in namespace "e2e-tests-secrets-dgtbt" to be "success or failure" Jun 2 11:50:35.103: INFO: Pod "pod-secrets-44f99a24-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.527598ms Jun 2 11:50:37.107: INFO: Pod "pod-secrets-44f99a24-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007679535s Jun 2 11:50:39.111: INFO: Pod "pod-secrets-44f99a24-a4c7-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011539131s STEP: Saw pod success Jun 2 11:50:39.111: INFO: Pod "pod-secrets-44f99a24-a4c7-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:50:39.114: INFO: Trying to get logs from node hunter-worker pod pod-secrets-44f99a24-a4c7-11ea-889d-0242ac110018 container secret-volume-test: STEP: delete the pod Jun 2 11:50:39.156: INFO: Waiting for pod pod-secrets-44f99a24-a4c7-11ea-889d-0242ac110018 to disappear Jun 2 11:50:39.169: INFO: Pod pod-secrets-44f99a24-a4c7-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:50:39.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-dgtbt" for this suite. 
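The Secrets volume case above mounts a Secret into the pod and reads a key back as a file. A minimal sketch; the secret name, key, mount path, and image are placeholders rather than the test's generated names.
# Sketch: create a Secret, mount it read-only, and print one key's value.
kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
EOF
kubectl logs pod-secrets-demo   # expected output: value-1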
Jun 2 11:50:45.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:50:45.263: INFO: namespace: e2e-tests-secrets-dgtbt, resource: bindings, ignored listing per whitelist Jun 2 11:50:45.263: INFO: namespace e2e-tests-secrets-dgtbt deletion completed in 6.090475355s • [SLOW TEST:10.307 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:50:45.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jun 2 11:50:45.383: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:50:52.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-2sfmk" for this suite. 
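The InitContainer case above verifies that init containers run to completion, in order, before the app container of a RestartAlways pod starts. A minimal sketch with assumed names and images:
# Sketch: two init containers gate the app container's start.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo init1 done"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo init2 done"]
  containers:
  - name: app
    image: docker.io/library/nginx
EOF
kubectl get pod init-demo   # STATUS moves Init:0/2 -> Init:1/2 -> PodInitializing -> Running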
Jun 2 11:51:14.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:51:14.819: INFO: namespace: e2e-tests-init-container-2sfmk, resource: bindings, ignored listing per whitelist Jun 2 11:51:14.841: INFO: namespace e2e-tests-init-container-2sfmk deletion completed in 22.093564309s • [SLOW TEST:29.578 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:51:14.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jun 2 11:51:15.011: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hbk95,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbk95/configmaps/e2e-watch-test-watch-closed,UID:5cc0141e-a4c7-11ea-99e8-0242ac110002,ResourceVersion:13828614,Generation:0,CreationTimestamp:2020-06-02 11:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 2 11:51:15.012: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hbk95,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbk95/configmaps/e2e-watch-test-watch-closed,UID:5cc0141e-a4c7-11ea-99e8-0242ac110002,ResourceVersion:13828615,Generation:0,CreationTimestamp:2020-06-02 11:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jun 2 11:51:15.022: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hbk95,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbk95/configmaps/e2e-watch-test-watch-closed,UID:5cc0141e-a4c7-11ea-99e8-0242ac110002,ResourceVersion:13828616,Generation:0,CreationTimestamp:2020-06-02 11:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 2 11:51:15.022: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-hbk95,SelfLink:/api/v1/namespaces/e2e-tests-watch-hbk95/configmaps/e2e-watch-test-watch-closed,UID:5cc0141e-a4c7-11ea-99e8-0242ac110002,ResourceVersion:13828617,Generation:0,CreationTimestamp:2020-06-02 11:51:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:51:15.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-hbk95" for this suite. Jun 2 11:51:21.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:51:21.087: INFO: namespace: e2e-tests-watch-hbk95, resource: bindings, ignored listing per whitelist Jun 2 11:51:21.108: INFO: namespace e2e-tests-watch-hbk95 deletion completed in 6.081563216s • [SLOW TEST:6.266 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:51:21.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 2 11:51:21.333: INFO: Waiting up to 5m0s for pod "pod-6079ae8e-a4c7-11ea-889d-0242ac110018" in namespace "e2e-tests-emptydir-6wv8x" to be "success or failure" Jun 2 11:51:21.368: INFO: Pod "pod-6079ae8e-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 35.425858ms Jun 2 11:51:23.406: INFO: Pod "pod-6079ae8e-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072791217s Jun 2 11:51:25.412: INFO: Pod "pod-6079ae8e-a4c7-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07875331s STEP: Saw pod success Jun 2 11:51:25.412: INFO: Pod "pod-6079ae8e-a4c7-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:51:25.414: INFO: Trying to get logs from node hunter-worker pod pod-6079ae8e-a4c7-11ea-889d-0242ac110018 container test-container: STEP: delete the pod Jun 2 11:51:25.475: INFO: Waiting for pod pod-6079ae8e-a4c7-11ea-889d-0242ac110018 to disappear Jun 2 11:51:25.479: INFO: Pod pod-6079ae8e-a4c7-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:51:25.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-6wv8x" for this suite. Jun 2 11:51:31.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:51:31.517: INFO: namespace: e2e-tests-emptydir-6wv8x, resource: bindings, ignored listing per whitelist Jun 2 11:51:31.574: INFO: namespace e2e-tests-emptydir-6wv8x deletion completed in 6.091095853s • [SLOW TEST:10.466 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:51:31.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
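The pod created in the next step wires a postStart httpGet hook to that handler container. A minimal sketch of such a hook, with the handler's host, port, and path given as placeholders rather than the addresses the framework resolves at runtime:
# Sketch: a postStart httpGet hook fired against a separate handler pod after the container starts.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: app
    image: docker.io/library/nginx
    lifecycle:
      postStart:
        httpGet:
          host: 10.244.1.1          # placeholder: IP of the hook-handler pod
          port: 8080
          path: /echo?msg=poststart
EOF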
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 2 11:51:39.747: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 2 11:51:39.774: INFO: Pod pod-with-poststart-http-hook still exists Jun 2 11:51:41.774: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 2 11:51:41.779: INFO: Pod pod-with-poststart-http-hook still exists Jun 2 11:51:43.775: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 2 11:51:43.779: INFO: Pod pod-with-poststart-http-hook still exists Jun 2 11:51:45.774: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 2 11:51:45.778: INFO: Pod pod-with-poststart-http-hook still exists Jun 2 11:51:47.774: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 2 11:51:47.779: INFO: Pod pod-with-poststart-http-hook still exists Jun 2 11:51:49.774: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 2 11:51:49.778: INFO: Pod pod-with-poststart-http-hook still exists Jun 2 11:51:51.775: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 2 11:51:51.779: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:51:51.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-8cv4v" for this suite. Jun 2 11:52:13.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:52:13.895: INFO: namespace: e2e-tests-container-lifecycle-hook-8cv4v, resource: bindings, ignored listing per whitelist Jun 2 11:52:13.903: INFO: namespace e2e-tests-container-lifecycle-hook-8cv4v deletion completed in 22.119685857s • [SLOW TEST:42.329 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:52:13.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 2 11:52:14.030: INFO: Waiting up to 5m0s for pod "pod-7ff0720b-a4c7-11ea-889d-0242ac110018" in namespace "e2e-tests-emptydir-shljk" to be 
"success or failure" Jun 2 11:52:14.053: INFO: Pod "pod-7ff0720b-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 23.642605ms Jun 2 11:52:16.058: INFO: Pod "pod-7ff0720b-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028175554s Jun 2 11:52:18.062: INFO: Pod "pod-7ff0720b-a4c7-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032296308s STEP: Saw pod success Jun 2 11:52:18.062: INFO: Pod "pod-7ff0720b-a4c7-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:52:18.064: INFO: Trying to get logs from node hunter-worker2 pod pod-7ff0720b-a4c7-11ea-889d-0242ac110018 container test-container: STEP: delete the pod Jun 2 11:52:18.083: INFO: Waiting for pod pod-7ff0720b-a4c7-11ea-889d-0242ac110018 to disappear Jun 2 11:52:18.142: INFO: Pod pod-7ff0720b-a4c7-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:52:18.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-shljk" for this suite. Jun 2 11:52:24.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:52:24.274: INFO: namespace: e2e-tests-emptydir-shljk, resource: bindings, ignored listing per whitelist Jun 2 11:52:24.328: INFO: namespace e2e-tests-emptydir-shljk deletion completed in 6.181508987s • [SLOW TEST:10.424 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:52:24.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-86272a62-a4c7-11ea-889d-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 2 11:52:24.461: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8627a21a-a4c7-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-r5k4k" to be "success or failure" Jun 2 11:52:24.483: INFO: Pod "pod-projected-configmaps-8627a21a-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 21.770949ms Jun 2 11:52:26.487: INFO: Pod "pod-projected-configmaps-8627a21a-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026121894s Jun 2 11:52:28.503: INFO: Pod "pod-projected-configmaps-8627a21a-a4c7-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.041474847s STEP: Saw pod success Jun 2 11:52:28.503: INFO: Pod "pod-projected-configmaps-8627a21a-a4c7-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:52:28.506: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-8627a21a-a4c7-11ea-889d-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jun 2 11:52:28.559: INFO: Waiting for pod pod-projected-configmaps-8627a21a-a4c7-11ea-889d-0242ac110018 to disappear Jun 2 11:52:28.658: INFO: Pod pod-projected-configmaps-8627a21a-a4c7-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:52:28.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-r5k4k" for this suite. Jun 2 11:52:34.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:52:34.747: INFO: namespace: e2e-tests-projected-r5k4k, resource: bindings, ignored listing per whitelist Jun 2 11:52:34.759: INFO: namespace e2e-tests-projected-r5k4k deletion completed in 6.097045867s • [SLOW TEST:10.431 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:52:34.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 2 11:52:34.987: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"8c64ce08-a4c7-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001c076ba), BlockOwnerDeletion:(*bool)(0xc001c076bb)}} Jun 2 11:52:35.066: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"8c62117e-a4c7-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001d86b02), BlockOwnerDeletion:(*bool)(0xc001d86b03)}} Jun 2 11:52:35.101: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"8c62951d-a4c7-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0024aeb9a), BlockOwnerDeletion:(*bool)(0xc0024aeb9b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:52:40.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-z9965" for this suite. 
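The three ownerReferences dumps above are the heart of the dependency-circle test: pod1 is owned by pod3, pod2 by pod1 and pod3 by pod2, and the garbage collector must leave the cycle alone instead of cascading a deletion through it. A minimal sketch of how such a reference can be built with the k8s.io/apimachinery types follows; the pod names mirror the log, but the UIDs and the marshal-and-print wrapper are only illustrative, not the suite's actual code.

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ownedBy builds the ownerReferences entry that makes one pod a dependent of
// another, matching the Controller/BlockOwnerDeletion flags visible in the
// pod1/pod2/pod3 dumps above (the UIDs here are placeholders, not the logged ones).
func ownedBy(ownerName string, ownerUID types.UID) []metav1.OwnerReference {
	controller := true
	block := true
	return []metav1.OwnerReference{{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               ownerName,
		UID:                ownerUID,
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}}
}

func main() {
	// pod1 owned by pod3, pod2 by pod1, pod3 by pod2: a closed ownership cycle.
	refs := map[string][]metav1.OwnerReference{
		"pod1": ownedBy("pod3", types.UID("uid-of-pod3")),
		"pod2": ownedBy("pod1", types.UID("uid-of-pod1")),
		"pod3": ownedBy("pod2", types.UID("uid-of-pod2")),
	}
	out, _ := json.MarshalIndent(refs, "", "  ")
	fmt.Println(string(out))
}

Since every pod in the cycle is a dependent of another, a naive collector could chase the references forever; the conformance check is simply that all three pods survive once the controller has observed them.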
Jun 2 11:52:46.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:52:46.181: INFO: namespace: e2e-tests-gc-z9965, resource: bindings, ignored listing per whitelist Jun 2 11:52:46.228: INFO: namespace e2e-tests-gc-z9965 deletion completed in 6.101043438s • [SLOW TEST:11.469 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:52:46.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 2 11:52:46.351: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jun 2 11:52:46.358: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 2 11:52:51.362: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 2 11:52:51.362: INFO: Creating deployment "test-rolling-update-deployment" Jun 2 11:52:51.366: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jun 2 11:52:51.378: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jun 2 11:52:53.384: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jun 2 11:52:53.387: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726695571, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726695571, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726695571, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726695571, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 2 11:52:55.390: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 2 
11:52:55.397: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-vthqj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vthqj/deployments/test-rolling-update-deployment,UID:9633ab98-a4c7-11ea-99e8-0242ac110002,ResourceVersion:13829024,Generation:1,CreationTimestamp:2020-06-02 11:52:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-02 11:52:51 +0000 UTC 2020-06-02 11:52:51 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-02 11:52:55 +0000 UTC 2020-06-02 11:52:51 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 2 11:52:55.400: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-vthqj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vthqj/replicasets/test-rolling-update-deployment-75db98fb4c,UID:9636c329-a4c7-11ea-99e8-0242ac110002,ResourceVersion:13829015,Generation:1,CreationTimestamp:2020-06-02 11:52:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9633ab98-a4c7-11ea-99e8-0242ac110002 0xc002018207 0xc002018208}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 2 11:52:55.400: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 2 11:52:55.400: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-vthqj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vthqj/replicasets/test-rolling-update-controller,UID:933717f4-a4c7-11ea-99e8-0242ac110002,ResourceVersion:13829023,Generation:2,CreationTimestamp:2020-06-02 11:52:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 
3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9633ab98-a4c7-11ea-99e8-0242ac110002 0xc00201811f 0xc002018130}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 2 11:52:55.402: INFO: Pod "test-rolling-update-deployment-75db98fb4c-m8jlp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-m8jlp,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-vthqj,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vthqj/pods/test-rolling-update-deployment-75db98fb4c-m8jlp,UID:963777ac-a4c7-11ea-99e8-0242ac110002,ResourceVersion:13829014,Generation:0,CreationTimestamp:2020-06-02 11:52:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 9636c329-a4c7-11ea-99e8-0242ac110002 0xc002217207 0xc002217208}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4k9jg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4k9jg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-4k9jg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002217300} {node.kubernetes.io/unreachable Exists NoExecute 0xc002217320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:52:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:52:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:52:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:52:51 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.67,StartTime:2020-06-02 11:52:51 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-02 11:52:54 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://557e89f30b0e8137d2a8217fb288705aad69bb90d67fd04508654f84e6b4d83b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:52:55.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-vthqj" for this suite. 
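For reference, the Deployment dumped above boils down to a one-replica rolling-update Deployment over the redis test image whose selector also matches the pre-existing test-rolling-update-controller ReplicaSet, which is why that ReplicaSet is adopted and then scaled to zero during the rollout. Below is a sketch of an equivalent object built with the k8s.io/api types; the names, labels and image are copied from the log, while the JSON-printing wrapper is just for illustration.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod"}

	// A deployment shaped like "test-rolling-update-deployment" in the log:
	// one replica of the redis test image, rolled out with the RollingUpdate
	// strategy shown in the dump above.
	d := &appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{Kind: "Deployment", APIVersion: "apps/v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment", Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RollingUpdateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}

Output like this could be fed to kubectl create -f -; the e2e suite constructs the object in Go through its deployment helpers rather than from a manifest.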
Jun 2 11:53:01.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:53:01.686: INFO: namespace: e2e-tests-deployment-vthqj, resource: bindings, ignored listing per whitelist Jun 2 11:53:01.698: INFO: namespace e2e-tests-deployment-vthqj deletion completed in 6.293636747s • [SLOW TEST:15.470 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:53:01.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 2 11:53:01.863: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9c74c05f-a4c7-11ea-889d-0242ac110018" in namespace "e2e-tests-downward-api-gjv9p" to be "success or failure" Jun 2 11:53:01.887: INFO: Pod "downwardapi-volume-9c74c05f-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 23.990855ms Jun 2 11:53:03.892: INFO: Pod "downwardapi-volume-9c74c05f-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028783367s Jun 2 11:53:05.896: INFO: Pod "downwardapi-volume-9c74c05f-a4c7-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033096715s STEP: Saw pod success Jun 2 11:53:05.896: INFO: Pod "downwardapi-volume-9c74c05f-a4c7-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:53:05.899: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-9c74c05f-a4c7-11ea-889d-0242ac110018 container client-container: STEP: delete the pod Jun 2 11:53:05.933: INFO: Waiting for pod downwardapi-volume-9c74c05f-a4c7-11ea-889d-0242ac110018 to disappear Jun 2 11:53:05.945: INFO: Pod downwardapi-volume-9c74c05f-a4c7-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:53:05.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-gjv9p" for this suite. 
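The pod created by this test mounts a downwardAPI volume in which an item exposes a pod field and carries an explicit per-item mode, and it is that mode which the test then reads back from inside the container. A rough sketch follows; the mount path, file name, 0400 mode, busybox image and shell command are all stand-ins rather than the suite's exact values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// An arbitrary example mode; the e2e test asserts that the mounted file
	// ends up with exactly the mode requested on the item.
	mode := int32(0400)

	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
							Mode:     &mode, // per-item mode, the property under test
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}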
Jun 2 11:53:11.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:53:12.066: INFO: namespace: e2e-tests-downward-api-gjv9p, resource: bindings, ignored listing per whitelist Jun 2 11:53:12.086: INFO: namespace e2e-tests-downward-api-gjv9p deletion completed in 6.138338703s • [SLOW TEST:10.387 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:53:12.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 2 11:53:12.193: INFO: Waiting up to 5m0s for pod "pod-a29d0737-a4c7-11ea-889d-0242ac110018" in namespace "e2e-tests-emptydir-7vp7k" to be "success or failure" Jun 2 11:53:12.207: INFO: Pod "pod-a29d0737-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.862011ms Jun 2 11:53:14.234: INFO: Pod "pod-a29d0737-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040593226s Jun 2 11:53:16.342: INFO: Pod "pod-a29d0737-a4c7-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.149384172s STEP: Saw pod success Jun 2 11:53:16.342: INFO: Pod "pod-a29d0737-a4c7-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:53:16.412: INFO: Trying to get logs from node hunter-worker2 pod pod-a29d0737-a4c7-11ea-889d-0242ac110018 container test-container: STEP: delete the pod Jun 2 11:53:16.696: INFO: Waiting for pod pod-a29d0737-a4c7-11ea-889d-0242ac110018 to disappear Jun 2 11:53:16.705: INFO: Pod pod-a29d0737-a4c7-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:53:16.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-7vp7k" for this suite. 
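The "(root,0666,tmpfs)" case differs from the default-medium emptyDir tests only in that the volume is declared with medium Memory, so the kubelet backs it with tmpfs before the container writes its 0666 test file. A sketch of such a pod follows; the busybox image and the chmod/ls command stand in for the suite's mounttest container.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a 0666 file on the volume and show the result,
				// roughly what the conformance test verifies from its logs.
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}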
Jun 2 11:53:22.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:53:22.733: INFO: namespace: e2e-tests-emptydir-7vp7k, resource: bindings, ignored listing per whitelist Jun 2 11:53:22.829: INFO: namespace e2e-tests-emptydir-7vp7k deletion completed in 6.1175311s • [SLOW TEST:10.743 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:53:22.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 2 11:53:22.970: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a9099bfc-a4c7-11ea-889d-0242ac110018" in namespace "e2e-tests-downward-api-x9vqg" to be "success or failure" Jun 2 11:53:23.055: INFO: Pod "downwardapi-volume-a9099bfc-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 84.466087ms Jun 2 11:53:25.058: INFO: Pod "downwardapi-volume-a9099bfc-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088019147s Jun 2 11:53:27.063: INFO: Pod "downwardapi-volume-a9099bfc-a4c7-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092461515s STEP: Saw pod success Jun 2 11:53:27.063: INFO: Pod "downwardapi-volume-a9099bfc-a4c7-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:53:27.065: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-a9099bfc-a4c7-11ea-889d-0242ac110018 container client-container: STEP: delete the pod Jun 2 11:53:27.121: INFO: Waiting for pod downwardapi-volume-a9099bfc-a4c7-11ea-889d-0242ac110018 to disappear Jun 2 11:53:27.130: INFO: Pod downwardapi-volume-a9099bfc-a4c7-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:53:27.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-x9vqg" for this suite. 
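Here the downwardAPI item uses a resourceFieldRef rather than a fieldRef: because the container sets no resources.limits.cpu, the file it reads reports the node's allocatable CPU, which is exactly the fallback this test verifies. A compact sketch of just that item, with the file and container names assumed:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Same downwardAPI volume idea as in the mode example above, but the item
	// points at a container resource instead of a metadata field. With no
	// limits.cpu set on that container, the mounted file reports node
	// allocatable CPU.
	item := corev1.DownwardAPIVolumeFile{
		Path: "cpu_limit",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container", // a container without a CPU limit
			Resource:      "limits.cpu",
		},
	}
	out, _ := json.MarshalIndent(item, "", "  ")
	fmt.Println(string(out))
}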
Jun 2 11:53:33.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:53:33.171: INFO: namespace: e2e-tests-downward-api-x9vqg, resource: bindings, ignored listing per whitelist Jun 2 11:53:33.224: INFO: namespace e2e-tests-downward-api-x9vqg deletion completed in 6.089033909s • [SLOW TEST:10.394 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:53:33.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 2 11:53:33.355: INFO: Waiting up to 5m0s for pod "downward-api-af366220-a4c7-11ea-889d-0242ac110018" in namespace "e2e-tests-downward-api-tzsrr" to be "success or failure" Jun 2 11:53:33.384: INFO: Pod "downward-api-af366220-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 28.877322ms Jun 2 11:53:35.387: INFO: Pod "downward-api-af366220-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032339013s Jun 2 11:53:37.390: INFO: Pod "downward-api-af366220-a4c7-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035394534s STEP: Saw pod success Jun 2 11:53:37.391: INFO: Pod "downward-api-af366220-a4c7-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:53:37.392: INFO: Trying to get logs from node hunter-worker2 pod downward-api-af366220-a4c7-11ea-889d-0242ac110018 container dapi-container: STEP: delete the pod Jun 2 11:53:37.521: INFO: Waiting for pod downward-api-af366220-a4c7-11ea-889d-0242ac110018 to disappear Jun 2 11:53:37.605: INFO: Pod downward-api-af366220-a4c7-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:53:37.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tzsrr" for this suite. 
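Unlike the two volume-based cases above, this test delivers the pod fields to dapi-container as environment variables through env[].valueFrom.fieldRef. A small sketch of that wiring follows; the variable names, busybox image and grep command are mine, only the fieldPath values are what the test exercises.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// downwardEnv builds one env var whose value is taken from a pod field,
// the mechanism exercised by the downward API env-var test.
func downwardEnv(name, fieldPath string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: name,
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: fieldPath},
		},
	}
}

func main() {
	container := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env | grep ^POD_"},
		Env: []corev1.EnvVar{
			downwardEnv("POD_NAME", "metadata.name"),
			downwardEnv("POD_NAMESPACE", "metadata.namespace"),
			downwardEnv("POD_IP", "status.podIP"),
		},
	}
	out, _ := json.MarshalIndent(container, "", "  ")
	fmt.Println(string(out))
}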
Jun 2 11:53:43.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:53:43.663: INFO: namespace: e2e-tests-downward-api-tzsrr, resource: bindings, ignored listing per whitelist Jun 2 11:53:43.701: INFO: namespace e2e-tests-downward-api-tzsrr deletion completed in 6.093177314s • [SLOW TEST:10.477 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:53:43.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 2 11:53:43.842: INFO: Creating deployment "test-recreate-deployment" Jun 2 11:53:43.856: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jun 2 11:53:43.864: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Jun 2 11:53:45.910: INFO: Waiting deployment "test-recreate-deployment" to complete Jun 2 11:53:45.912: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726695623, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726695623, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726695623, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726695623, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 2 11:53:47.916: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jun 2 11:53:47.922: INFO: Updating deployment test-recreate-deployment Jun 2 11:53:47.922: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 2 11:53:48.170: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-p8znx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p8znx/deployments/test-recreate-deployment,UID:b57b6e2a-a4c7-11ea-99e8-0242ac110002,ResourceVersion:13829289,Generation:2,CreationTimestamp:2020-06-02 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-06-02 11:53:48 +0000 UTC 2020-06-02 11:53:48 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-06-02 11:53:48 +0000 UTC 2020-06-02 11:53:43 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jun 2 11:53:48.186: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-p8znx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p8znx/replicasets/test-recreate-deployment-589c4bfd,UID:b7fb91da-a4c7-11ea-99e8-0242ac110002,ResourceVersion:13829288,Generation:1,CreationTimestamp:2020-06-02 11:53:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment b57b6e2a-a4c7-11ea-99e8-0242ac110002 0xc0022e571f 0xc0022e5730}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 2 11:53:48.186: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 2 11:53:48.186: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-p8znx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p8znx/replicasets/test-recreate-deployment-5bf7f65dc,UID:b57ea55b-a4c7-11ea-99e8-0242ac110002,ResourceVersion:13829278,Generation:2,CreationTimestamp:2020-06-02 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment b57b6e2a-a4c7-11ea-99e8-0242ac110002 0xc0022e5a00 0xc0022e5a01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 2 11:53:48.190: INFO: Pod "test-recreate-deployment-589c4bfd-p5wh8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-p5wh8,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-p8znx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p8znx/pods/test-recreate-deployment-589c4bfd-p5wh8,UID:b7fe1b8f-a4c7-11ea-99e8-0242ac110002,ResourceVersion:13829290,Generation:0,CreationTimestamp:2020-06-02 11:53:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd b7fb91da-a4c7-11ea-99e8-0242ac110002 0xc002856d3f 0xc002856d50}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jbkmj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jbkmj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jbkmj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002856dc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002856de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:53:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:53:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:53:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 11:53:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-02 11:53:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:53:48.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-p8znx" for this suite. 
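The Pending nginx pod dumped just above is the visible effect of the Recreate strategy: the redis ReplicaSet from revision 1 has already been scaled to zero before the new pod is allowed to start, so old and new pods never overlap. For contrast with the rolling-update test earlier, here is a sketch of the two strategy stanzas side by side; the 25% values correspond to the MaxUnavailable/MaxSurge fields that the log's formatter rendered garbled as "25%!,(MISSING)".

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	quarter := intstr.FromString("25%")

	// The two strategies seen in the deployment tests above: the rolling
	// update used by test-rolling-update-deployment (25%/25% is also what
	// apps/v1 defaulting applies when the fields are left unset), and the
	// Recreate strategy used by test-recreate-deployment, which scales the
	// old ReplicaSet to zero before any new pod starts.
	strategies := map[string]appsv1.DeploymentStrategy{
		"rolling-update": {
			Type: appsv1.RollingUpdateDeploymentStrategyType,
			RollingUpdate: &appsv1.RollingUpdateDeployment{
				MaxUnavailable: &quarter,
				MaxSurge:       &quarter,
			},
		},
		"recreate": {Type: appsv1.RecreateDeploymentStrategyType},
	}
	out, _ := json.MarshalIndent(strategies, "", "  ")
	fmt.Println(string(out))
}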
Jun 2 11:53:54.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:53:54.350: INFO: namespace: e2e-tests-deployment-p8znx, resource: bindings, ignored listing per whitelist Jun 2 11:53:54.457: INFO: namespace e2e-tests-deployment-p8znx deletion completed in 6.264356363s • [SLOW TEST:10.756 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:53:54.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-wkkx6.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-wkkx6.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wkkx6.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-wkkx6.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-wkkx6.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wkkx6.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 2 11:54:00.745: INFO: DNS probes using e2e-tests-dns-wkkx6/dns-test-bbe125e0-a4c7-11ea-889d-0242ac110018 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:54:00.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-wkkx6" for this suite. 
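Both probe pods in this DNS test run the long dig loops quoted above and drop an OK marker file under /results for every name that resolves over UDP and TCP; the framework then reads those markers back from the pod. As a much-reduced sketch of the same idea, here is a single container that checks only kubernetes.default, with busybox's nslookup standing in for dig and the marker file name shortened.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One self-contained probe in the spirit of the test's wheezy/jessie
	// containers: resolve kubernetes.default through the cluster search path
	// and record the outcome under /results for later inspection.
	probe := corev1.Container{
		Name:  "dns-probe",
		Image: "busybox",
		Command: []string{"sh", "-c",
			"nslookup kubernetes.default && echo OK > /results/udp@kubernetes.default; sleep 3600"},
		VolumeMounts: []corev1.VolumeMount{{Name: "results", MountPath: "/results"}},
	}
	out, _ := json.MarshalIndent(probe, "", "  ")
	fmt.Println(string(out))
}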
Jun 2 11:54:06.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:54:06.888: INFO: namespace: e2e-tests-dns-wkkx6, resource: bindings, ignored listing per whitelist Jun 2 11:54:06.912: INFO: namespace e2e-tests-dns-wkkx6 deletion completed in 6.120589331s • [SLOW TEST:12.455 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:54:06.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 2 11:54:11.187: INFO: Waiting up to 5m0s for pod "client-envvars-c5bc2972-a4c7-11ea-889d-0242ac110018" in namespace "e2e-tests-pods-t2dzq" to be "success or failure" Jun 2 11:54:11.223: INFO: Pod "client-envvars-c5bc2972-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 36.223099ms Jun 2 11:54:13.227: INFO: Pod "client-envvars-c5bc2972-a4c7-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039997901s Jun 2 11:54:15.230: INFO: Pod "client-envvars-c5bc2972-a4c7-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043684313s STEP: Saw pod success Jun 2 11:54:15.230: INFO: Pod "client-envvars-c5bc2972-a4c7-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:54:15.233: INFO: Trying to get logs from node hunter-worker pod client-envvars-c5bc2972-a4c7-11ea-889d-0242ac110018 container env3cont: STEP: delete the pod Jun 2 11:54:15.272: INFO: Waiting for pod client-envvars-c5bc2972-a4c7-11ea-889d-0242ac110018 to disappear Jun 2 11:54:15.282: INFO: Pod client-envvars-c5bc2972-a4c7-11ea-889d-0242ac110018 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:54:15.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-t2dzq" for this suite. 
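What this test asserts is kubelet behaviour rather than anything special in the pod spec: once a Service exists, every container started afterwards in the namespace receives <NAME>_SERVICE_HOST and <NAME>_SERVICE_PORT variables for it, and the env3cont container simply prints its environment so the suite can look for them. Below is a sketch of a Service that would surface as FOOSERVICE_SERVICE_HOST / FOOSERVICE_SERVICE_PORT; the name, selector and ports are assumptions, not necessarily the values this run used.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A service named "fooservice" selecting a server pod; pods created in
	// the namespace after this point see FOOSERVICE_SERVICE_HOST and
	// FOOSERVICE_SERVICE_PORT in their environment.
	svc := &corev1.Service{
		TypeMeta:   metav1.TypeMeta{Kind: "Service", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "fooservice"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "server-envvars"},
			Ports: []corev1.ServicePort{{
				Port:       8765,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}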
Jun 2 11:55:05.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:55:05.343: INFO: namespace: e2e-tests-pods-t2dzq, resource: bindings, ignored listing per whitelist Jun 2 11:55:05.377: INFO: namespace e2e-tests-pods-t2dzq deletion completed in 50.090832926s • [SLOW TEST:58.464 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:55:05.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:55:12.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-vnssf" for this suite. 
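The three STEP lines above compress the whole adoption scenario: a bare pod labelled name=pod-adoption exists first, then a ReplicationController with that same label selector is created, and because the pod carries no controller owner reference yet, the RC adopts it instead of starting a replacement. A sketch of such a controller follows; the nginx image is an assumption.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "pod-adoption"}

	// The selector matches the pre-existing orphan pod's label, so the RC
	// adopts that pod (writing itself into the pod's ownerReferences as the
	// controller) rather than creating a new replica.
	rc := &corev1.ReplicationController{
		TypeMeta:   metav1.TypeMeta{Kind: "ReplicationController", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pod-adoption",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}

Adoption shows up as a new ownerReferences entry on the pod pointing at the controller, the mirror image of the owner references built by hand in the garbage-collector example earlier.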
Jun 2 11:55:34.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:55:34.593: INFO: namespace: e2e-tests-replication-controller-vnssf, resource: bindings, ignored listing per whitelist Jun 2 11:55:34.614: INFO: namespace e2e-tests-replication-controller-vnssf deletion completed in 22.093604014s • [SLOW TEST:29.237 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:55:34.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Jun 2 11:55:34.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zv4m4' Jun 2 11:55:34.983: INFO: stderr: "" Jun 2 11:55:34.983: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 2 11:55:34.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-zv4m4' Jun 2 11:55:35.140: INFO: stderr: "" Jun 2 11:55:35.140: INFO: stdout: "update-demo-nautilus-lnkqp update-demo-nautilus-vk2vq " Jun 2 11:55:35.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lnkqp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zv4m4' Jun 2 11:55:35.255: INFO: stderr: "" Jun 2 11:55:35.255: INFO: stdout: "" Jun 2 11:55:35.255: INFO: update-demo-nautilus-lnkqp is created but not running Jun 2 11:55:40.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-zv4m4' Jun 2 11:55:40.373: INFO: stderr: "" Jun 2 11:55:40.373: INFO: stdout: "update-demo-nautilus-lnkqp update-demo-nautilus-vk2vq " Jun 2 11:55:40.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lnkqp -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zv4m4' Jun 2 11:55:40.476: INFO: stderr: "" Jun 2 11:55:40.476: INFO: stdout: "true" Jun 2 11:55:40.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lnkqp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zv4m4' Jun 2 11:55:40.586: INFO: stderr: "" Jun 2 11:55:40.586: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 2 11:55:40.586: INFO: validating pod update-demo-nautilus-lnkqp Jun 2 11:55:40.590: INFO: got data: { "image": "nautilus.jpg" } Jun 2 11:55:40.590: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 2 11:55:40.590: INFO: update-demo-nautilus-lnkqp is verified up and running Jun 2 11:55:40.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vk2vq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zv4m4' Jun 2 11:55:40.701: INFO: stderr: "" Jun 2 11:55:40.701: INFO: stdout: "true" Jun 2 11:55:40.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vk2vq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zv4m4' Jun 2 11:55:40.812: INFO: stderr: "" Jun 2 11:55:40.812: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 2 11:55:40.812: INFO: validating pod update-demo-nautilus-vk2vq Jun 2 11:55:40.817: INFO: got data: { "image": "nautilus.jpg" } Jun 2 11:55:40.817: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 2 11:55:40.817: INFO: update-demo-nautilus-vk2vq is verified up and running STEP: rolling-update to new replication controller Jun 2 11:55:40.819: INFO: scanned /root for discovery docs: Jun 2 11:55:40.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-zv4m4' Jun 2 11:56:03.512: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 2 11:56:03.512: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 2 11:56:03.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-zv4m4' Jun 2 11:56:03.604: INFO: stderr: "" Jun 2 11:56:03.604: INFO: stdout: "update-demo-kitten-t7v4s update-demo-kitten-vfd22 " Jun 2 11:56:03.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t7v4s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zv4m4' Jun 2 11:56:03.706: INFO: stderr: "" Jun 2 11:56:03.706: INFO: stdout: "true" Jun 2 11:56:03.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t7v4s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zv4m4' Jun 2 11:56:03.805: INFO: stderr: "" Jun 2 11:56:03.805: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 2 11:56:03.805: INFO: validating pod update-demo-kitten-t7v4s Jun 2 11:56:03.822: INFO: got data: { "image": "kitten.jpg" } Jun 2 11:56:03.822: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 2 11:56:03.822: INFO: update-demo-kitten-t7v4s is verified up and running Jun 2 11:56:03.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vfd22 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zv4m4' Jun 2 11:56:03.929: INFO: stderr: "" Jun 2 11:56:03.929: INFO: stdout: "true" Jun 2 11:56:03.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vfd22 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zv4m4' Jun 2 11:56:04.044: INFO: stderr: "" Jun 2 11:56:04.044: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 2 11:56:04.044: INFO: validating pod update-demo-kitten-vfd22 Jun 2 11:56:04.055: INFO: got data: { "image": "kitten.jpg" } Jun 2 11:56:04.055: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 2 11:56:04.055: INFO: update-demo-kitten-vfd22 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:56:04.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zv4m4" for this suite. 
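The rolling update above is driven by piping a replacement RC manifest into the deprecated rolling-update command. A hand-run equivalent on the same v1.13-era kubectl is sketched below using the --image form instead of '-f -'; apart from the controller name, namespace, kubeconfig path, update period and images taken from the log, the exact flags and the jsonpath query are assumptions.

kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus \
  --update-period=1s \
  --image=gcr.io/kubernetes-e2e-test-images/kitten:1.0 \
  --namespace=e2e-tests-kubectl-zv4m4
# Poll the pods behind the name=update-demo selector until each reports a
# ready container, mirroring the Go-template queries logged above.
kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo \
  --namespace=e2e-tests-kubectl-zv4m4 \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[0].ready}{"\n"}{end}'

On current clusters the same workflow is normally expressed as a Deployment plus kubectl rollout, which is what the deprecation warning captured in the stderr above points at.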
Jun 2 11:56:28.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:56:28.103: INFO: namespace: e2e-tests-kubectl-zv4m4, resource: bindings, ignored listing per whitelist Jun 2 11:56:28.182: INFO: namespace e2e-tests-kubectl-zv4m4 deletion completed in 24.122886209s • [SLOW TEST:53.568 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:56:28.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Jun 2 11:56:28.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jun 2 11:56:28.549: INFO: stderr: "" Jun 2 11:56:28.549: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:56:28.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mj479" for this suite. 
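The assertion this spec makes, that the core v1 group/version must be among the advertised API versions, can be checked with a one-liner; the command is the one logged above, while the grep-based check is an assumption.

kubectl --kubeconfig=/root/.kube/config api-versions | grep -x v1 && echo "core v1 is served"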
Jun 2 11:56:34.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:56:34.633: INFO: namespace: e2e-tests-kubectl-mj479, resource: bindings, ignored listing per whitelist Jun 2 11:56:34.653: INFO: namespace e2e-tests-kubectl-mj479 deletion completed in 6.1003801s • [SLOW TEST:6.471 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:56:34.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-1b5cb587-a4c8-11ea-889d-0242ac110018 STEP: Creating secret with name secret-projected-all-test-volume-1b5cb55b-a4c8-11ea-889d-0242ac110018 STEP: Creating a pod to test Check all projections for projected volume plugin Jun 2 11:56:34.798: INFO: Waiting up to 5m0s for pod "projected-volume-1b5cb4f6-a4c8-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-9s8js" to be "success or failure" Jun 2 11:56:34.816: INFO: Pod "projected-volume-1b5cb4f6-a4c8-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.215493ms Jun 2 11:56:36.820: INFO: Pod "projected-volume-1b5cb4f6-a4c8-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022763941s Jun 2 11:56:38.825: INFO: Pod "projected-volume-1b5cb4f6-a4c8-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026941873s STEP: Saw pod success Jun 2 11:56:38.825: INFO: Pod "projected-volume-1b5cb4f6-a4c8-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 11:56:38.828: INFO: Trying to get logs from node hunter-worker pod projected-volume-1b5cb4f6-a4c8-11ea-889d-0242ac110018 container projected-all-volume-test: STEP: delete the pod Jun 2 11:56:38.846: INFO: Waiting for pod projected-volume-1b5cb4f6-a4c8-11ea-889d-0242ac110018 to disappear Jun 2 11:56:38.862: INFO: Pod projected-volume-1b5cb4f6-a4c8-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:56:38.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9s8js" for this suite. 
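The "all projections" wording refers to the projected volume type, which merges secret, configMap and downward API sources under a single mount. A minimal hand-written equivalent is sketched below; the cluster and kubeconfig come from the log, while every object name, key and path here is an illustrative assumption.

# Inputs for the projection (names and keys are made up for the sketch).
kubectl --kubeconfig=/root/.kube/config create configmap projected-demo-cm --from-literal=configmap-data=from-configmap
kubectl --kubeconfig=/root/.kube/config create secret generic projected-demo-secret --from-literal=secret-data=from-secret
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /projected-volume/podname /projected-volume/configmap-data /projected-volume/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: projected-demo-cm
      - secret:
          name: projected-demo-secret
EOF
# Once the pod has run to completion, its log shows the pod name plus both projected values.
kubectl --kubeconfig=/root/.kube/config logs projected-all-demo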
Jun 2 11:56:44.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:56:44.973: INFO: namespace: e2e-tests-projected-9s8js, resource: bindings, ignored listing per whitelist Jun 2 11:56:45.046: INFO: namespace e2e-tests-projected-9s8js deletion completed in 6.181017195s • [SLOW TEST:10.393 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:56:45.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-jkxn STEP: Creating a pod to test atomic-volume-subpath Jun 2 11:56:45.805: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jkxn" in namespace "e2e-tests-subpath-rsqh7" to be "success or failure" Jun 2 11:56:45.854: INFO: Pod "pod-subpath-test-configmap-jkxn": Phase="Pending", Reason="", readiness=false. Elapsed: 49.394217ms Jun 2 11:56:47.895: INFO: Pod "pod-subpath-test-configmap-jkxn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089865957s Jun 2 11:56:49.920: INFO: Pod "pod-subpath-test-configmap-jkxn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1151928s Jun 2 11:56:51.924: INFO: Pod "pod-subpath-test-configmap-jkxn": Phase="Running", Reason="", readiness=true. Elapsed: 6.118764186s Jun 2 11:56:53.969: INFO: Pod "pod-subpath-test-configmap-jkxn": Phase="Running", Reason="", readiness=false. Elapsed: 8.164007524s Jun 2 11:56:55.973: INFO: Pod "pod-subpath-test-configmap-jkxn": Phase="Running", Reason="", readiness=false. Elapsed: 10.167639181s Jun 2 11:56:57.977: INFO: Pod "pod-subpath-test-configmap-jkxn": Phase="Running", Reason="", readiness=false. Elapsed: 12.172038502s Jun 2 11:56:59.981: INFO: Pod "pod-subpath-test-configmap-jkxn": Phase="Running", Reason="", readiness=false. Elapsed: 14.176493982s Jun 2 11:57:02.016: INFO: Pod "pod-subpath-test-configmap-jkxn": Phase="Running", Reason="", readiness=false. Elapsed: 16.21145359s Jun 2 11:57:04.020: INFO: Pod "pod-subpath-test-configmap-jkxn": Phase="Running", Reason="", readiness=false. Elapsed: 18.215468798s Jun 2 11:57:06.025: INFO: Pod "pod-subpath-test-configmap-jkxn": Phase="Running", Reason="", readiness=false. Elapsed: 20.21980373s Jun 2 11:57:08.046: INFO: Pod "pod-subpath-test-configmap-jkxn": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.241547663s Jun 2 11:57:10.051: INFO: Pod "pod-subpath-test-configmap-jkxn": Phase="Running", Reason="", readiness=false. Elapsed: 24.245742037s Jun 2 11:57:12.055: INFO: Pod "pod-subpath-test-configmap-jkxn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.250227513s STEP: Saw pod success Jun 2 11:57:12.055: INFO: Pod "pod-subpath-test-configmap-jkxn" satisfied condition "success or failure" Jun 2 11:57:12.059: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-jkxn container test-container-subpath-configmap-jkxn: STEP: delete the pod Jun 2 11:57:12.145: INFO: Waiting for pod pod-subpath-test-configmap-jkxn to disappear Jun 2 11:57:12.156: INFO: Pod pod-subpath-test-configmap-jkxn no longer exists STEP: Deleting pod pod-subpath-test-configmap-jkxn Jun 2 11:57:12.156: INFO: Deleting pod "pod-subpath-test-configmap-jkxn" in namespace "e2e-tests-subpath-rsqh7" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:57:12.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-rsqh7" for this suite. Jun 2 11:57:18.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 11:57:18.197: INFO: namespace: e2e-tests-subpath-rsqh7, resource: bindings, ignored listing per whitelist Jun 2 11:57:18.275: INFO: namespace e2e-tests-subpath-rsqh7 deletion completed in 6.112090049s • [SLOW TEST:33.228 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 11:57:18.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-8fnnw [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jun 2 11:57:18.495: INFO: Found 0 stateful pods, waiting for 3 Jun 2 11:57:28.519: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 2 11:57:28.519: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 2 11:57:28.519: INFO: Waiting 
for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Jun 2 11:57:38.501: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 2 11:57:38.501: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 2 11:57:38.501: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jun 2 11:57:38.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8fnnw ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 2 11:57:38.774: INFO: stderr: "I0602 11:57:38.646234 2841 log.go:172] (0xc0008002c0) (0xc000700640) Create stream\nI0602 11:57:38.646332 2841 log.go:172] (0xc0008002c0) (0xc000700640) Stream added, broadcasting: 1\nI0602 11:57:38.648766 2841 log.go:172] (0xc0008002c0) Reply frame received for 1\nI0602 11:57:38.648795 2841 log.go:172] (0xc0008002c0) (0xc0007006e0) Create stream\nI0602 11:57:38.648809 2841 log.go:172] (0xc0008002c0) (0xc0007006e0) Stream added, broadcasting: 3\nI0602 11:57:38.649932 2841 log.go:172] (0xc0008002c0) Reply frame received for 3\nI0602 11:57:38.649970 2841 log.go:172] (0xc0008002c0) (0xc000700780) Create stream\nI0602 11:57:38.649986 2841 log.go:172] (0xc0008002c0) (0xc000700780) Stream added, broadcasting: 5\nI0602 11:57:38.650904 2841 log.go:172] (0xc0008002c0) Reply frame received for 5\nI0602 11:57:38.766578 2841 log.go:172] (0xc0008002c0) Data frame received for 3\nI0602 11:57:38.766611 2841 log.go:172] (0xc0007006e0) (3) Data frame handling\nI0602 11:57:38.766626 2841 log.go:172] (0xc0007006e0) (3) Data frame sent\nI0602 11:57:38.766639 2841 log.go:172] (0xc0008002c0) Data frame received for 3\nI0602 11:57:38.766645 2841 log.go:172] (0xc0007006e0) (3) Data frame handling\nI0602 11:57:38.766888 2841 log.go:172] (0xc0008002c0) Data frame received for 5\nI0602 11:57:38.766937 2841 log.go:172] (0xc000700780) (5) Data frame handling\nI0602 11:57:38.768912 2841 log.go:172] (0xc0008002c0) Data frame received for 1\nI0602 11:57:38.768940 2841 log.go:172] (0xc000700640) (1) Data frame handling\nI0602 11:57:38.768962 2841 log.go:172] (0xc000700640) (1) Data frame sent\nI0602 11:57:38.768981 2841 log.go:172] (0xc0008002c0) (0xc000700640) Stream removed, broadcasting: 1\nI0602 11:57:38.769378 2841 log.go:172] (0xc0008002c0) Go away received\nI0602 11:57:38.769425 2841 log.go:172] (0xc0008002c0) (0xc000700640) Stream removed, broadcasting: 1\nI0602 11:57:38.769456 2841 log.go:172] (0xc0008002c0) (0xc0007006e0) Stream removed, broadcasting: 3\nI0602 11:57:38.769474 2841 log.go:172] (0xc0008002c0) (0xc000700780) Stream removed, broadcasting: 5\n" Jun 2 11:57:38.775: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 2 11:57:38.775: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jun 2 11:57:48.830: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jun 2 11:57:58.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8fnnw ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:57:59.099: INFO: stderr: "I0602 11:57:59.022349 2862 log.go:172] (0xc0006f2370) (0xc00068e640) Create 
stream\nI0602 11:57:59.022412 2862 log.go:172] (0xc0006f2370) (0xc00068e640) Stream added, broadcasting: 1\nI0602 11:57:59.024653 2862 log.go:172] (0xc0006f2370) Reply frame received for 1\nI0602 11:57:59.024687 2862 log.go:172] (0xc0006f2370) (0xc000532e60) Create stream\nI0602 11:57:59.024701 2862 log.go:172] (0xc0006f2370) (0xc000532e60) Stream added, broadcasting: 3\nI0602 11:57:59.025954 2862 log.go:172] (0xc0006f2370) Reply frame received for 3\nI0602 11:57:59.026001 2862 log.go:172] (0xc0006f2370) (0xc0004fa000) Create stream\nI0602 11:57:59.026027 2862 log.go:172] (0xc0006f2370) (0xc0004fa000) Stream added, broadcasting: 5\nI0602 11:57:59.026885 2862 log.go:172] (0xc0006f2370) Reply frame received for 5\nI0602 11:57:59.094662 2862 log.go:172] (0xc0006f2370) Data frame received for 3\nI0602 11:57:59.094704 2862 log.go:172] (0xc000532e60) (3) Data frame handling\nI0602 11:57:59.094722 2862 log.go:172] (0xc000532e60) (3) Data frame sent\nI0602 11:57:59.094732 2862 log.go:172] (0xc0006f2370) Data frame received for 3\nI0602 11:57:59.094736 2862 log.go:172] (0xc000532e60) (3) Data frame handling\nI0602 11:57:59.094774 2862 log.go:172] (0xc0006f2370) Data frame received for 5\nI0602 11:57:59.094793 2862 log.go:172] (0xc0004fa000) (5) Data frame handling\nI0602 11:57:59.095683 2862 log.go:172] (0xc0006f2370) Data frame received for 1\nI0602 11:57:59.095789 2862 log.go:172] (0xc00068e640) (1) Data frame handling\nI0602 11:57:59.095812 2862 log.go:172] (0xc00068e640) (1) Data frame sent\nI0602 11:57:59.095821 2862 log.go:172] (0xc0006f2370) (0xc00068e640) Stream removed, broadcasting: 1\nI0602 11:57:59.095836 2862 log.go:172] (0xc0006f2370) Go away received\nI0602 11:57:59.095997 2862 log.go:172] (0xc0006f2370) (0xc00068e640) Stream removed, broadcasting: 1\nI0602 11:57:59.096016 2862 log.go:172] (0xc0006f2370) (0xc000532e60) Stream removed, broadcasting: 3\nI0602 11:57:59.096028 2862 log.go:172] (0xc0006f2370) (0xc0004fa000) Stream removed, broadcasting: 5\n" Jun 2 11:57:59.099: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 2 11:57:59.099: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 2 11:58:09.121: INFO: Waiting for StatefulSet e2e-tests-statefulset-8fnnw/ss2 to complete update Jun 2 11:58:09.121: INFO: Waiting for Pod e2e-tests-statefulset-8fnnw/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 2 11:58:09.121: INFO: Waiting for Pod e2e-tests-statefulset-8fnnw/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 2 11:58:09.121: INFO: Waiting for Pod e2e-tests-statefulset-8fnnw/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 2 11:58:19.134: INFO: Waiting for StatefulSet e2e-tests-statefulset-8fnnw/ss2 to complete update Jun 2 11:58:19.134: INFO: Waiting for Pod e2e-tests-statefulset-8fnnw/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Jun 2 11:58:29.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8fnnw ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 2 11:58:29.429: INFO: stderr: "I0602 11:58:29.265618 2885 log.go:172] (0xc0007b6160) (0xc00071c640) Create stream\nI0602 11:58:29.265673 2885 log.go:172] (0xc0007b6160) (0xc00071c640) Stream added, broadcasting: 1\nI0602 11:58:29.268209 2885 log.go:172] (0xc0007b6160) Reply frame received for 
1\nI0602 11:58:29.268267 2885 log.go:172] (0xc0007b6160) (0xc000610c80) Create stream\nI0602 11:58:29.268279 2885 log.go:172] (0xc0007b6160) (0xc000610c80) Stream added, broadcasting: 3\nI0602 11:58:29.269341 2885 log.go:172] (0xc0007b6160) Reply frame received for 3\nI0602 11:58:29.269375 2885 log.go:172] (0xc0007b6160) (0xc00071c6e0) Create stream\nI0602 11:58:29.269394 2885 log.go:172] (0xc0007b6160) (0xc00071c6e0) Stream added, broadcasting: 5\nI0602 11:58:29.270361 2885 log.go:172] (0xc0007b6160) Reply frame received for 5\nI0602 11:58:29.421390 2885 log.go:172] (0xc0007b6160) Data frame received for 3\nI0602 11:58:29.421543 2885 log.go:172] (0xc000610c80) (3) Data frame handling\nI0602 11:58:29.421628 2885 log.go:172] (0xc000610c80) (3) Data frame sent\nI0602 11:58:29.421850 2885 log.go:172] (0xc0007b6160) Data frame received for 3\nI0602 11:58:29.421888 2885 log.go:172] (0xc000610c80) (3) Data frame handling\nI0602 11:58:29.421922 2885 log.go:172] (0xc0007b6160) Data frame received for 5\nI0602 11:58:29.421939 2885 log.go:172] (0xc00071c6e0) (5) Data frame handling\nI0602 11:58:29.423401 2885 log.go:172] (0xc0007b6160) Data frame received for 1\nI0602 11:58:29.423429 2885 log.go:172] (0xc00071c640) (1) Data frame handling\nI0602 11:58:29.423446 2885 log.go:172] (0xc00071c640) (1) Data frame sent\nI0602 11:58:29.423473 2885 log.go:172] (0xc0007b6160) (0xc00071c640) Stream removed, broadcasting: 1\nI0602 11:58:29.423486 2885 log.go:172] (0xc0007b6160) Go away received\nI0602 11:58:29.423733 2885 log.go:172] (0xc0007b6160) (0xc00071c640) Stream removed, broadcasting: 1\nI0602 11:58:29.423774 2885 log.go:172] (0xc0007b6160) (0xc000610c80) Stream removed, broadcasting: 3\nI0602 11:58:29.423805 2885 log.go:172] (0xc0007b6160) (0xc00071c6e0) Stream removed, broadcasting: 5\n" Jun 2 11:58:29.430: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 2 11:58:29.430: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 2 11:58:39.463: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jun 2 11:58:49.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-8fnnw ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 11:58:49.715: INFO: stderr: "I0602 11:58:49.621772 2907 log.go:172] (0xc000148840) (0xc0004912c0) Create stream\nI0602 11:58:49.621846 2907 log.go:172] (0xc000148840) (0xc0004912c0) Stream added, broadcasting: 1\nI0602 11:58:49.624315 2907 log.go:172] (0xc000148840) Reply frame received for 1\nI0602 11:58:49.624349 2907 log.go:172] (0xc000148840) (0xc000491360) Create stream\nI0602 11:58:49.624356 2907 log.go:172] (0xc000148840) (0xc000491360) Stream added, broadcasting: 3\nI0602 11:58:49.625512 2907 log.go:172] (0xc000148840) Reply frame received for 3\nI0602 11:58:49.625557 2907 log.go:172] (0xc000148840) (0xc00053e000) Create stream\nI0602 11:58:49.625571 2907 log.go:172] (0xc000148840) (0xc00053e000) Stream added, broadcasting: 5\nI0602 11:58:49.626511 2907 log.go:172] (0xc000148840) Reply frame received for 5\nI0602 11:58:49.707331 2907 log.go:172] (0xc000148840) Data frame received for 5\nI0602 11:58:49.707368 2907 log.go:172] (0xc00053e000) (5) Data frame handling\nI0602 11:58:49.707402 2907 log.go:172] (0xc000148840) Data frame received for 3\nI0602 11:58:49.707420 2907 log.go:172] (0xc000491360) (3) Data frame handling\nI0602 11:58:49.707440 2907 
log.go:172] (0xc000491360) (3) Data frame sent\nI0602 11:58:49.707465 2907 log.go:172] (0xc000148840) Data frame received for 3\nI0602 11:58:49.707473 2907 log.go:172] (0xc000491360) (3) Data frame handling\nI0602 11:58:49.709407 2907 log.go:172] (0xc000148840) Data frame received for 1\nI0602 11:58:49.709453 2907 log.go:172] (0xc0004912c0) (1) Data frame handling\nI0602 11:58:49.709499 2907 log.go:172] (0xc0004912c0) (1) Data frame sent\nI0602 11:58:49.709531 2907 log.go:172] (0xc000148840) (0xc0004912c0) Stream removed, broadcasting: 1\nI0602 11:58:49.709713 2907 log.go:172] (0xc000148840) (0xc0004912c0) Stream removed, broadcasting: 1\nI0602 11:58:49.709732 2907 log.go:172] (0xc000148840) (0xc000491360) Stream removed, broadcasting: 3\nI0602 11:58:49.709858 2907 log.go:172] (0xc000148840) Go away received\nI0602 11:58:49.709913 2907 log.go:172] (0xc000148840) (0xc00053e000) Stream removed, broadcasting: 5\n" Jun 2 11:58:49.715: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 2 11:58:49.715: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 2 11:58:59.734: INFO: Waiting for StatefulSet e2e-tests-statefulset-8fnnw/ss2 to complete update Jun 2 11:58:59.734: INFO: Waiting for Pod e2e-tests-statefulset-8fnnw/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 2 11:58:59.734: INFO: Waiting for Pod e2e-tests-statefulset-8fnnw/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 2 11:58:59.734: INFO: Waiting for Pod e2e-tests-statefulset-8fnnw/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 2 11:59:09.743: INFO: Waiting for StatefulSet e2e-tests-statefulset-8fnnw/ss2 to complete update Jun 2 11:59:09.743: INFO: Waiting for Pod e2e-tests-statefulset-8fnnw/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 2 11:59:09.743: INFO: Waiting for Pod e2e-tests-statefulset-8fnnw/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 2 11:59:19.743: INFO: Waiting for StatefulSet e2e-tests-statefulset-8fnnw/ss2 to complete update Jun 2 11:59:19.743: INFO: Waiting for Pod e2e-tests-statefulset-8fnnw/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 2 11:59:29.743: INFO: Deleting all statefulset in ns e2e-tests-statefulset-8fnnw Jun 2 11:59:29.746: INFO: Scaling statefulset ss2 to 0 Jun 2 11:59:59.765: INFO: Waiting for statefulset status.replicas updated to 0 Jun 2 11:59:59.768: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 11:59:59.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-8fnnw" for this suite. 
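The update-and-rollback flow traced above (a new image rolled out pod by pod in reverse ordinal order, then the previous revision restored the same way) can be reproduced with a plain StatefulSet. The sketch assumes the kubeconfig from the log and the two nginx images named in the STEP lines; the StatefulSet name, headless service and labels are illustrative, and rollback is expressed simply as restoring the previous image rather than whatever the suite does internally.

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None          # headless service backing the StatefulSet
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# Roll forward, wait for the new revision, then roll back by restoring the old image.
kubectl --kubeconfig=/root/.kube/config set image statefulset/web nginx=docker.io/library/nginx:1.15-alpine
kubectl --kubeconfig=/root/.kube/config rollout status statefulset/web
kubectl --kubeconfig=/root/.kube/config set image statefulset/web nginx=docker.io/library/nginx:1.14-alpine
kubectl --kubeconfig=/root/.kube/config rollout status statefulset/web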
Jun 2 12:00:07.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:00:07.841: INFO: namespace: e2e-tests-statefulset-8fnnw, resource: bindings, ignored listing per whitelist Jun 2 12:00:07.902: INFO: namespace e2e-tests-statefulset-8fnnw deletion completed in 8.113952085s • [SLOW TEST:169.627 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:00:07.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jun 2 12:00:08.888: INFO: Pod name wrapped-volume-race-9aef92bd-a4c8-11ea-889d-0242ac110018: Found 0 pods out of 5 Jun 2 12:00:13.897: INFO: Pod name wrapped-volume-race-9aef92bd-a4c8-11ea-889d-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9aef92bd-a4c8-11ea-889d-0242ac110018 in namespace e2e-tests-emptydir-wrapper-b69fp, will wait for the garbage collector to delete the pods Jun 2 12:02:38.006: INFO: Deleting ReplicationController wrapped-volume-race-9aef92bd-a4c8-11ea-889d-0242ac110018 took: 8.840747ms Jun 2 12:02:38.107: INFO: Terminating ReplicationController wrapped-volume-race-9aef92bd-a4c8-11ea-889d-0242ac110018 pods took: 100.227533ms STEP: Creating RC which spawns configmap-volume pods Jun 2 12:03:22.353: INFO: Pod name wrapped-volume-race-0e4604f0-a4c9-11ea-889d-0242ac110018: Found 0 pods out of 5 Jun 2 12:03:27.359: INFO: Pod name wrapped-volume-race-0e4604f0-a4c9-11ea-889d-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-0e4604f0-a4c9-11ea-889d-0242ac110018 in namespace e2e-tests-emptydir-wrapper-b69fp, will wait for the garbage collector to delete the pods Jun 2 12:05:43.444: INFO: Deleting ReplicationController wrapped-volume-race-0e4604f0-a4c9-11ea-889d-0242ac110018 took: 7.68991ms Jun 2 12:05:43.544: INFO: Terminating ReplicationController wrapped-volume-race-0e4604f0-a4c9-11ea-889d-0242ac110018 pods took: 100.252315ms STEP: Creating RC which spawns configmap-volume pods Jun 2 12:06:21.385: INFO: Pod name wrapped-volume-race-78fcf8b9-a4c9-11ea-889d-0242ac110018: Found 0 pods out of 5 Jun 2 12:06:26.402: INFO: Pod name 
wrapped-volume-race-78fcf8b9-a4c9-11ea-889d-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-78fcf8b9-a4c9-11ea-889d-0242ac110018 in namespace e2e-tests-emptydir-wrapper-b69fp, will wait for the garbage collector to delete the pods Jun 2 12:09:00.487: INFO: Deleting ReplicationController wrapped-volume-race-78fcf8b9-a4c9-11ea-889d-0242ac110018 took: 8.451945ms Jun 2 12:09:00.588: INFO: Terminating ReplicationController wrapped-volume-race-78fcf8b9-a4c9-11ea-889d-0242ac110018 pods took: 100.302875ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:09:43.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-b69fp" for this suite. Jun 2 12:09:51.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:09:51.107: INFO: namespace: e2e-tests-emptydir-wrapper-b69fp, resource: bindings, ignored listing per whitelist Jun 2 12:09:51.157: INFO: namespace e2e-tests-emptydir-wrapper-b69fp deletion completed in 8.136313553s • [SLOW TEST:583.255 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:09:51.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-f618413c-a4c9-11ea-889d-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 2 12:09:51.324: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f62451df-a4c9-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-zgndz" to be "success or failure" Jun 2 12:09:51.327: INFO: Pod "pod-projected-configmaps-f62451df-a4c9-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.882997ms Jun 2 12:09:53.331: INFO: Pod "pod-projected-configmaps-f62451df-a4c9-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006966084s Jun 2 12:09:55.341: INFO: Pod "pod-projected-configmaps-f62451df-a4c9-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017551299s STEP: Saw pod success Jun 2 12:09:55.341: INFO: Pod "pod-projected-configmaps-f62451df-a4c9-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:09:55.344: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-f62451df-a4c9-11ea-889d-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jun 2 12:09:55.364: INFO: Waiting for pod pod-projected-configmaps-f62451df-a4c9-11ea-889d-0242ac110018 to disappear Jun 2 12:09:55.369: INFO: Pod pod-projected-configmaps-f62451df-a4c9-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:09:55.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zgndz" for this suite. Jun 2 12:10:01.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:10:01.651: INFO: namespace: e2e-tests-projected-zgndz, resource: bindings, ignored listing per whitelist Jun 2 12:10:01.660: INFO: namespace e2e-tests-projected-zgndz deletion completed in 6.286742458s • [SLOW TEST:10.503 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:10:01.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-5v6n STEP: Creating a pod to test atomic-volume-subpath Jun 2 12:10:01.856: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5v6n" in namespace "e2e-tests-subpath-lwg2g" to be "success or failure" Jun 2 12:10:01.897: INFO: Pod "pod-subpath-test-configmap-5v6n": Phase="Pending", Reason="", readiness=false. Elapsed: 40.890957ms Jun 2 12:10:03.901: INFO: Pod "pod-subpath-test-configmap-5v6n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044866369s Jun 2 12:10:05.906: INFO: Pod "pod-subpath-test-configmap-5v6n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04916518s Jun 2 12:10:07.910: INFO: Pod "pod-subpath-test-configmap-5v6n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05358189s Jun 2 12:10:09.915: INFO: Pod "pod-subpath-test-configmap-5v6n": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.058417741s Jun 2 12:10:11.919: INFO: Pod "pod-subpath-test-configmap-5v6n": Phase="Running", Reason="", readiness=false. Elapsed: 10.062083284s Jun 2 12:10:13.922: INFO: Pod "pod-subpath-test-configmap-5v6n": Phase="Running", Reason="", readiness=false. Elapsed: 12.065696672s Jun 2 12:10:15.927: INFO: Pod "pod-subpath-test-configmap-5v6n": Phase="Running", Reason="", readiness=false. Elapsed: 14.070207105s Jun 2 12:10:17.931: INFO: Pod "pod-subpath-test-configmap-5v6n": Phase="Running", Reason="", readiness=false. Elapsed: 16.074858924s Jun 2 12:10:19.936: INFO: Pod "pod-subpath-test-configmap-5v6n": Phase="Running", Reason="", readiness=false. Elapsed: 18.079087867s Jun 2 12:10:21.940: INFO: Pod "pod-subpath-test-configmap-5v6n": Phase="Running", Reason="", readiness=false. Elapsed: 20.08355459s Jun 2 12:10:23.945: INFO: Pod "pod-subpath-test-configmap-5v6n": Phase="Running", Reason="", readiness=false. Elapsed: 22.088702786s Jun 2 12:10:25.950: INFO: Pod "pod-subpath-test-configmap-5v6n": Phase="Running", Reason="", readiness=false. Elapsed: 24.093208633s Jun 2 12:10:27.954: INFO: Pod "pod-subpath-test-configmap-5v6n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.09782576s STEP: Saw pod success Jun 2 12:10:27.954: INFO: Pod "pod-subpath-test-configmap-5v6n" satisfied condition "success or failure" Jun 2 12:10:27.957: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-5v6n container test-container-subpath-configmap-5v6n: STEP: delete the pod Jun 2 12:10:27.997: INFO: Waiting for pod pod-subpath-test-configmap-5v6n to disappear Jun 2 12:10:28.010: INFO: Pod pod-subpath-test-configmap-5v6n no longer exists STEP: Deleting pod pod-subpath-test-configmap-5v6n Jun 2 12:10:28.010: INFO: Deleting pod "pod-subpath-test-configmap-5v6n" in namespace "e2e-tests-subpath-lwg2g" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:10:28.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-lwg2g" for this suite. 
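Both subpath specs in this run exercise the same mechanism: a volumeMount whose subPath targets a single key of a ConfigMap, so the container sees one file (here overlaying an existing path) instead of a whole directory. A hand-rolled sketch follows, with all names, keys and paths being assumptions.

kubectl --kubeconfig=/root/.kube/config create configmap subpath-demo --from-literal=index.html='served from a configmap'
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["/bin/sh", "-c", "cat /usr/share/demo/index.html"]
    volumeMounts:
    - name: config
      mountPath: /usr/share/demo/index.html   # mounted as a single file, not a directory
      subPath: index.html
  volumes:
  - name: config
    configMap:
      name: subpath-demo
EOF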
Jun 2 12:10:34.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:10:34.134: INFO: namespace: e2e-tests-subpath-lwg2g, resource: bindings, ignored listing per whitelist Jun 2 12:10:34.176: INFO: namespace e2e-tests-subpath-lwg2g deletion completed in 6.160034778s • [SLOW TEST:32.515 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:10:34.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 2 12:10:34.362: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0fc5a3bf-a4ca-11ea-889d-0242ac110018" in namespace "e2e-tests-downward-api-8r9rf" to be "success or failure" Jun 2 12:10:34.376: INFO: Pod "downwardapi-volume-0fc5a3bf-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.596728ms Jun 2 12:10:36.381: INFO: Pod "downwardapi-volume-0fc5a3bf-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019271592s Jun 2 12:10:38.385: INFO: Pod "downwardapi-volume-0fc5a3bf-a4ca-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02295942s STEP: Saw pod success Jun 2 12:10:38.385: INFO: Pod "downwardapi-volume-0fc5a3bf-a4ca-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:10:38.387: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-0fc5a3bf-a4ca-11ea-889d-0242ac110018 container client-container: STEP: delete the pod Jun 2 12:10:38.421: INFO: Waiting for pod downwardapi-volume-0fc5a3bf-a4ca-11ea-889d-0242ac110018 to disappear Jun 2 12:10:38.430: INFO: Pod downwardapi-volume-0fc5a3bf-a4ca-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:10:38.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8r9rf" for this suite. 
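The downward API volume in this spec surfaces the container's own memory request as a file. A stripped-down version is sketched below; only the general shape matches what the test exercises, and the pod name, paths and resource values are assumptions.

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-memory-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
          divisor: 1Mi     # file contains the request in MiB, i.e. "32" here
EOF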
Jun 2 12:10:44.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:10:44.559: INFO: namespace: e2e-tests-downward-api-8r9rf, resource: bindings, ignored listing per whitelist Jun 2 12:10:44.587: INFO: namespace e2e-tests-downward-api-8r9rf deletion completed in 6.153430043s • [SLOW TEST:10.411 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:10:44.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 2 12:10:44.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jun 2 12:10:44.904: INFO: stderr: "" Jun 2 12:10:44.904: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:10:44.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-89j6m" for this suite. 
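The "all data is printed" check above only inspects the human-readable output; if the same client and server versions are needed programmatically, the JSON form is easier to consume. This one-liner is an assumption on top of the logged command, and jq is assumed to be installed.

kubectl --kubeconfig=/root/.kube/config version -o json | jq -r '.clientVersion.gitVersion, .serverVersion.gitVersion'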
Jun 2 12:10:50.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:10:50.977: INFO: namespace: e2e-tests-kubectl-89j6m, resource: bindings, ignored listing per whitelist Jun 2 12:10:51.052: INFO: namespace e2e-tests-kubectl-89j6m deletion completed in 6.142620345s • [SLOW TEST:6.464 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:10:51.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:10:51.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-9z6jc" for this suite. 
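What this spec guards against is a pod stuck in a crash loop becoming undeletable. The equivalent manual exercise, with an assumed pod name and image:

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: always-fails
spec:
  containers:
  - name: always-fails
    image: busybox
    command: ["/bin/false"]   # exits non-zero; the default restartPolicy keeps it crash-looping
EOF
# Deletion must still succeed even though the container never runs successfully.
kubectl --kubeconfig=/root/.kube/config delete pod always-fails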
Jun 2 12:10:57.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:10:57.316: INFO: namespace: e2e-tests-kubelet-test-9z6jc, resource: bindings, ignored listing per whitelist Jun 2 12:10:57.349: INFO: namespace e2e-tests-kubelet-test-9z6jc deletion completed in 6.085078041s • [SLOW TEST:6.297 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:10:57.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-1d92ea91-a4ca-11ea-889d-0242ac110018 STEP: Creating a pod to test consume secrets Jun 2 12:10:57.518: INFO: Waiting up to 5m0s for pod "pod-secrets-1d939e90-a4ca-11ea-889d-0242ac110018" in namespace "e2e-tests-secrets-jr9mv" to be "success or failure" Jun 2 12:10:57.532: INFO: Pod "pod-secrets-1d939e90-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.991109ms Jun 2 12:10:59.536: INFO: Pod "pod-secrets-1d939e90-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017995176s Jun 2 12:11:01.541: INFO: Pod "pod-secrets-1d939e90-a4ca-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022561963s STEP: Saw pod success Jun 2 12:11:01.541: INFO: Pod "pod-secrets-1d939e90-a4ca-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:11:01.544: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-1d939e90-a4ca-11ea-889d-0242ac110018 container secret-volume-test: STEP: delete the pod Jun 2 12:11:01.564: INFO: Waiting for pod pod-secrets-1d939e90-a4ca-11ea-889d-0242ac110018 to disappear Jun 2 12:11:01.568: INFO: Pod pod-secrets-1d939e90-a4ca-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:11:01.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-jr9mv" for this suite. 
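"With mappings" means the secret volume uses an items list to rename keys on disk. A minimal illustration, where the secret name, key and target path are assumptions:

kubectl --kubeconfig=/root/.kube/config create secret generic mapped-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: mapped-secret
      items:
      - key: data-1
        path: new-path-data-1   # the key is exposed under this mapped filename
EOF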
Jun 2 12:11:07.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:11:07.597: INFO: namespace: e2e-tests-secrets-jr9mv, resource: bindings, ignored listing per whitelist Jun 2 12:11:07.654: INFO: namespace e2e-tests-secrets-jr9mv deletion completed in 6.082890019s • [SLOW TEST:10.305 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:11:07.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-hkw5f/configmap-test-23b7cbf0-a4ca-11ea-889d-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 2 12:11:07.798: INFO: Waiting up to 5m0s for pod "pod-configmaps-23b92a72-a4ca-11ea-889d-0242ac110018" in namespace "e2e-tests-configmap-hkw5f" to be "success or failure" Jun 2 12:11:07.802: INFO: Pod "pod-configmaps-23b92a72-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.753136ms Jun 2 12:11:09.807: INFO: Pod "pod-configmaps-23b92a72-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008131098s Jun 2 12:11:11.810: INFO: Pod "pod-configmaps-23b92a72-a4ca-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011953385s STEP: Saw pod success Jun 2 12:11:11.811: INFO: Pod "pod-configmaps-23b92a72-a4ca-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:11:11.813: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-23b92a72-a4ca-11ea-889d-0242ac110018 container env-test: STEP: delete the pod Jun 2 12:11:11.943: INFO: Waiting for pod pod-configmaps-23b92a72-a4ca-11ea-889d-0242ac110018 to disappear Jun 2 12:11:11.958: INFO: Pod pod-configmaps-23b92a72-a4ca-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:11:11.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-hkw5f" for this suite. 
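The environment-variable consumption checked here corresponds to an env entry with a configMapKeyRef. A sketch with assumed object names and keys:

kubectl --kubeconfig=/root/.kube/config create configmap env-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["/bin/sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: env-demo
          key: data-1
EOF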
Jun 2 12:11:17.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:11:18.005: INFO: namespace: e2e-tests-configmap-hkw5f, resource: bindings, ignored listing per whitelist Jun 2 12:11:18.067: INFO: namespace e2e-tests-configmap-hkw5f deletion completed in 6.105465941s • [SLOW TEST:10.412 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:11:18.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 2 12:11:18.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-pxclm' Jun 2 12:11:21.064: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 2 12:11:21.064: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Jun 2 12:11:23.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-pxclm' Jun 2 12:11:23.349: INFO: stderr: "" Jun 2 12:11:23.349: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:11:23.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-pxclm" for this suite. 
Jun 2 12:11:39.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:11:39.547: INFO: namespace: e2e-tests-kubectl-pxclm, resource: bindings, ignored listing per whitelist Jun 2 12:11:39.669: INFO: namespace e2e-tests-kubectl-pxclm deletion completed in 16.316376596s • [SLOW TEST:21.601 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:11:39.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 2 12:11:39.758: INFO: Waiting up to 5m0s for pod "pod-36c45946-a4ca-11ea-889d-0242ac110018" in namespace "e2e-tests-emptydir-2hlhx" to be "success or failure" Jun 2 12:11:39.792: INFO: Pod "pod-36c45946-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 33.591337ms Jun 2 12:11:41.796: INFO: Pod "pod-36c45946-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037982695s Jun 2 12:11:43.801: INFO: Pod "pod-36c45946-a4ca-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042601932s STEP: Saw pod success Jun 2 12:11:43.801: INFO: Pod "pod-36c45946-a4ca-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:11:43.804: INFO: Trying to get logs from node hunter-worker pod pod-36c45946-a4ca-11ea-889d-0242ac110018 container test-container: STEP: delete the pod Jun 2 12:11:43.879: INFO: Waiting for pod pod-36c45946-a4ca-11ea-889d-0242ac110018 to disappear Jun 2 12:11:43.888: INFO: Pod pod-36c45946-a4ca-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:11:43.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-2hlhx" for this suite. 
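[Editor's sketch] The EmptyDir "(root,0644,default)" test above exercises an emptyDir volume on the node's default medium with a 0644 file mode. A minimal Go sketch of the volume and mount involved; the volume name and mount path are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Default medium means node-local storage; corev1.StorageMediumMemory would be tmpfs.
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
		},
	}
	mount := corev1.VolumeMount{Name: "test-volume", MountPath: "/test-volume"}
	// The test's container writes a file into the mount with the requested mode
	// (0644 in this variant) and verifies mode and content from inside the pod.
	fmt.Printf("%+v %+v\n", vol, mount)
}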
Jun 2 12:11:49.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:11:49.947: INFO: namespace: e2e-tests-emptydir-2hlhx, resource: bindings, ignored listing per whitelist Jun 2 12:11:49.987: INFO: namespace e2e-tests-emptydir-2hlhx deletion completed in 6.094892338s • [SLOW TEST:10.318 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:11:49.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:12:25.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-2p7hb" for this suite. 
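[Editor's sketch] The Container Runtime blackbox test above watches RestartCount, Phase, Ready and State for containers that exit; the terminate-cmd-rpa/rpof/rpn names appear to correspond to RestartPolicy Always/OnFailure/Never. A sketch under that assumption, with placeholder image and exit codes:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminatingPod builds a pod whose single container exits with the given code,
// so the kubelet's handling of the chosen RestartPolicy can be observed.
func terminatingPod(name string, policy corev1.RestartPolicy, exitCode int) corev1.Pod {
	return corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: policy,
			Containers: []corev1.Container{{
				Name:    name,
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", fmt.Sprintf("exit %d", exitCode)},
			}},
		},
	}
}

func main() {
	for _, p := range []corev1.Pod{
		terminatingPod("terminate-cmd-rpa", corev1.RestartPolicyAlways, 1),
		terminatingPod("terminate-cmd-rpof", corev1.RestartPolicyOnFailure, 1),
		terminatingPod("terminate-cmd-rpn", corev1.RestartPolicyNever, 0),
	} {
		fmt.Println(p.Name, p.Spec.RestartPolicy)
	}
}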
Jun 2 12:12:31.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:12:31.861: INFO: namespace: e2e-tests-container-runtime-2p7hb, resource: bindings, ignored listing per whitelist Jun 2 12:12:31.876: INFO: namespace e2e-tests-container-runtime-2p7hb deletion completed in 6.157442087s • [SLOW TEST:41.889 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:12:31.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 2 12:12:32.065: INFO: Waiting up to 5m0s for pod "downward-api-55ecfcba-a4ca-11ea-889d-0242ac110018" in namespace "e2e-tests-downward-api-xh48x" to be "success or failure" Jun 2 12:12:32.116: INFO: Pod "downward-api-55ecfcba-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 51.032351ms Jun 2 12:12:34.121: INFO: Pod "downward-api-55ecfcba-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055848455s Jun 2 12:12:36.152: INFO: Pod "downward-api-55ecfcba-a4ca-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086890621s STEP: Saw pod success Jun 2 12:12:36.152: INFO: Pod "downward-api-55ecfcba-a4ca-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:12:36.155: INFO: Trying to get logs from node hunter-worker pod downward-api-55ecfcba-a4ca-11ea-889d-0242ac110018 container dapi-container: STEP: delete the pod Jun 2 12:12:36.203: INFO: Waiting for pod downward-api-55ecfcba-a4ca-11ea-889d-0242ac110018 to disappear Jun 2 12:12:36.218: INFO: Pod downward-api-55ecfcba-a4ca-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:12:36.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xh48x" for this suite. 
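[Editor's sketch] The Downward API test above exposes the container's cpu/memory limits and requests as environment variables via resourceFieldRef. A minimal Go sketch; the env var names are illustrative, while "dapi-container" is the container name seen in the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	resourceEnv := func(name, resource string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					ContainerName: "dapi-container",
					Resource:      resource,
				},
			},
		}
	}
	env := []corev1.EnvVar{
		resourceEnv("CPU_LIMIT", "limits.cpu"),
		resourceEnv("MEMORY_LIMIT", "limits.memory"),
		resourceEnv("CPU_REQUEST", "requests.cpu"),
		resourceEnv("MEMORY_REQUEST", "requests.memory"),
	}
	fmt.Printf("%+v\n", env)
}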
Jun 2 12:12:42.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:12:42.266: INFO: namespace: e2e-tests-downward-api-xh48x, resource: bindings, ignored listing per whitelist Jun 2 12:12:42.307: INFO: namespace e2e-tests-downward-api-xh48x deletion completed in 6.084632345s • [SLOW TEST:10.430 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:12:42.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Jun 2 12:12:42.500: INFO: Waiting up to 5m0s for pod "client-containers-5c2955c4-a4ca-11ea-889d-0242ac110018" in namespace "e2e-tests-containers-tnf4g" to be "success or failure" Jun 2 12:12:42.506: INFO: Pod "client-containers-5c2955c4-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141869ms Jun 2 12:12:44.521: INFO: Pod "client-containers-5c2955c4-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021140247s Jun 2 12:12:46.525: INFO: Pod "client-containers-5c2955c4-a4ca-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02500795s STEP: Saw pod success Jun 2 12:12:46.525: INFO: Pod "client-containers-5c2955c4-a4ca-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:12:46.527: INFO: Trying to get logs from node hunter-worker2 pod client-containers-5c2955c4-a4ca-11ea-889d-0242ac110018 container test-container: STEP: delete the pod Jun 2 12:12:46.583: INFO: Waiting for pod client-containers-5c2955c4-a4ca-11ea-889d-0242ac110018 to disappear Jun 2 12:12:46.595: INFO: Pod client-containers-5c2955c4-a4ca-11ea-889d-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:12:46.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-tnf4g" for this suite. 
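[Editor's sketch] The Docker Containers "override the image's default arguments" test above relies on the Args field of the container spec. A minimal Go sketch of that override; the image and argument values are placeholders, not the test's own:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Setting Args (without Command) overrides only the image's CMD and keeps
	// its ENTRYPOINT; setting Command would override the ENTRYPOINT instead.
	c := corev1.Container{
		Name:  "test-container",
		Image: "busybox", // placeholder image
		Args:  []string{"override", "arguments"},
	}
	fmt.Println(c.Args)
}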
Jun 2 12:12:52.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:12:52.636: INFO: namespace: e2e-tests-containers-tnf4g, resource: bindings, ignored listing per whitelist Jun 2 12:12:52.680: INFO: namespace e2e-tests-containers-tnf4g deletion completed in 6.080981269s • [SLOW TEST:10.374 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:12:52.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 2 12:12:52.825: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6252e301-a4ca-11ea-889d-0242ac110018" in namespace "e2e-tests-downward-api-vqwlr" to be "success or failure" Jun 2 12:12:52.878: INFO: Pod "downwardapi-volume-6252e301-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 52.682714ms Jun 2 12:12:54.881: INFO: Pod "downwardapi-volume-6252e301-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055983169s Jun 2 12:12:56.885: INFO: Pod "downwardapi-volume-6252e301-a4ca-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060165203s STEP: Saw pod success Jun 2 12:12:56.885: INFO: Pod "downwardapi-volume-6252e301-a4ca-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:12:56.888: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-6252e301-a4ca-11ea-889d-0242ac110018 container client-container: STEP: delete the pod Jun 2 12:12:56.987: INFO: Waiting for pod downwardapi-volume-6252e301-a4ca-11ea-889d-0242ac110018 to disappear Jun 2 12:12:57.123: INFO: Pod downwardapi-volume-6252e301-a4ca-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:12:57.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-vqwlr" for this suite. 
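[Editor's sketch] The Downward API volume test above projects limits.memory into a file; because the container sets no memory limit, the projected value falls back to the node's allocatable memory, which is what the test asserts. A minimal Go sketch of the volume; the file path is illustrative, while "client-container" matches the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "memory_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.memory",
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}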
Jun 2 12:13:03.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:13:03.216: INFO: namespace: e2e-tests-downward-api-vqwlr, resource: bindings, ignored listing per whitelist Jun 2 12:13:03.220: INFO: namespace e2e-tests-downward-api-vqwlr deletion completed in 6.093059191s • [SLOW TEST:10.539 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:13:03.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-68946af0-a4ca-11ea-889d-0242ac110018 STEP: Creating a pod to test consume secrets Jun 2 12:13:03.360: INFO: Waiting up to 5m0s for pod "pod-secrets-689885d3-a4ca-11ea-889d-0242ac110018" in namespace "e2e-tests-secrets-l8c8j" to be "success or failure" Jun 2 12:13:03.416: INFO: Pod "pod-secrets-689885d3-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 55.772688ms Jun 2 12:13:05.419: INFO: Pod "pod-secrets-689885d3-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05834682s Jun 2 12:13:07.422: INFO: Pod "pod-secrets-689885d3-a4ca-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061302371s STEP: Saw pod success Jun 2 12:13:07.422: INFO: Pod "pod-secrets-689885d3-a4ca-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:13:07.424: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-689885d3-a4ca-11ea-889d-0242ac110018 container secret-volume-test: STEP: delete the pod Jun 2 12:13:07.465: INFO: Waiting for pod pod-secrets-689885d3-a4ca-11ea-889d-0242ac110018 to disappear Jun 2 12:13:07.477: INFO: Pod pod-secrets-689885d3-a4ca-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:13:07.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-l8c8j" for this suite. 
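[Editor's sketch] The "non-root with defaultMode and fsGroup set" secret test above combines a pod-level security context with a secret volume's DefaultMode. A minimal Go sketch; the UID, GID, file mode, names and image are illustrative values only:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid, fsGroup := int64(1000), int64(1001) // illustrative non-root IDs
	mode := int32(0440)                      // illustrative defaultMode
	spec := corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{
			RunAsUser: &uid,
			FSGroup:   &fsGroup, // group ownership applied to the projected files
		},
		Volumes: []corev1.Volume{{
			Name: "secret-volume",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{
					SecretName:  "secret-test-example",
					DefaultMode: &mode, // file mode for the mounted keys
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:  "secret-volume-test",
			Image: "busybox", // placeholder image
			VolumeMounts: []corev1.VolumeMount{{
				Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true,
			}},
		}},
	}
	fmt.Println(*spec.SecurityContext.FSGroup, *spec.Volumes[0].VolumeSource.Secret.DefaultMode)
}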
Jun 2 12:13:13.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:13:13.575: INFO: namespace: e2e-tests-secrets-l8c8j, resource: bindings, ignored listing per whitelist Jun 2 12:13:13.593: INFO: namespace e2e-tests-secrets-l8c8j deletion completed in 6.113622738s • [SLOW TEST:10.373 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:13:13.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Jun 2 12:13:13.705: INFO: Waiting up to 5m0s for pod "client-containers-6ec372f9-a4ca-11ea-889d-0242ac110018" in namespace "e2e-tests-containers-gxwgn" to be "success or failure" Jun 2 12:13:13.710: INFO: Pod "client-containers-6ec372f9-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.718022ms Jun 2 12:13:15.713: INFO: Pod "client-containers-6ec372f9-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008328209s Jun 2 12:13:17.717: INFO: Pod "client-containers-6ec372f9-a4ca-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012236833s STEP: Saw pod success Jun 2 12:13:17.717: INFO: Pod "client-containers-6ec372f9-a4ca-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:13:17.720: INFO: Trying to get logs from node hunter-worker pod client-containers-6ec372f9-a4ca-11ea-889d-0242ac110018 container test-container: STEP: delete the pod Jun 2 12:13:17.739: INFO: Waiting for pod client-containers-6ec372f9-a4ca-11ea-889d-0242ac110018 to disappear Jun 2 12:13:17.742: INFO: Pod client-containers-6ec372f9-a4ca-11ea-889d-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:13:17.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-gxwgn" for this suite. 
Jun 2 12:13:23.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:13:23.832: INFO: namespace: e2e-tests-containers-gxwgn, resource: bindings, ignored listing per whitelist Jun 2 12:13:23.834: INFO: namespace e2e-tests-containers-gxwgn deletion completed in 6.088389931s • [SLOW TEST:10.240 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:13:23.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:13:23.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-nqhrx" for this suite. 
Jun 2 12:13:29.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:13:30.019: INFO: namespace: e2e-tests-services-nqhrx, resource: bindings, ignored listing per whitelist Jun 2 12:13:30.031: INFO: namespace e2e-tests-services-nqhrx deletion completed in 6.090956865s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.197 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:13:30.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 2 12:13:30.187: INFO: Waiting up to 5m0s for pod "pod-7896c9c2-a4ca-11ea-889d-0242ac110018" in namespace "e2e-tests-emptydir-qlclh" to be "success or failure" Jun 2 12:13:30.206: INFO: Pod "pod-7896c9c2-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 19.208533ms Jun 2 12:13:32.269: INFO: Pod "pod-7896c9c2-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08250421s Jun 2 12:13:34.274: INFO: Pod "pod-7896c9c2-a4ca-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08732045s STEP: Saw pod success Jun 2 12:13:34.274: INFO: Pod "pod-7896c9c2-a4ca-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:13:34.277: INFO: Trying to get logs from node hunter-worker2 pod pod-7896c9c2-a4ca-11ea-889d-0242ac110018 container test-container: STEP: delete the pod Jun 2 12:13:34.323: INFO: Waiting for pod pod-7896c9c2-a4ca-11ea-889d-0242ac110018 to disappear Jun 2 12:13:34.334: INFO: Pod pod-7896c9c2-a4ca-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:13:34.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qlclh" for this suite. 
Jun 2 12:13:40.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:13:40.383: INFO: namespace: e2e-tests-emptydir-qlclh, resource: bindings, ignored listing per whitelist Jun 2 12:13:40.431: INFO: namespace e2e-tests-emptydir-qlclh deletion completed in 6.093940767s • [SLOW TEST:10.400 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:13:40.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Jun 2 12:13:40.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dhq66' Jun 2 12:13:40.843: INFO: stderr: "" Jun 2 12:13:40.843: INFO: stdout: "pod/pause created\n" Jun 2 12:13:40.843: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jun 2 12:13:40.843: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-dhq66" to be "running and ready" Jun 2 12:13:40.861: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 18.431466ms Jun 2 12:13:42.865: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022505741s Jun 2 12:13:44.869: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.026414264s Jun 2 12:13:44.869: INFO: Pod "pause" satisfied condition "running and ready" Jun 2 12:13:44.869: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Jun 2 12:13:44.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-dhq66' Jun 2 12:13:44.972: INFO: stderr: "" Jun 2 12:13:44.972: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jun 2 12:13:44.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-dhq66' Jun 2 12:13:45.064: INFO: stderr: "" Jun 2 12:13:45.064: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Jun 2 12:13:45.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-dhq66' Jun 2 12:13:45.175: INFO: stderr: "" Jun 2 12:13:45.175: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jun 2 12:13:45.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-dhq66' Jun 2 12:13:45.280: INFO: stderr: "" Jun 2 12:13:45.280: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Jun 2 12:13:45.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dhq66' Jun 2 12:13:45.398: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 2 12:13:45.398: INFO: stdout: "pod \"pause\" force deleted\n" Jun 2 12:13:45.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-dhq66' Jun 2 12:13:45.501: INFO: stderr: "No resources found.\n" Jun 2 12:13:45.501: INFO: stdout: "" Jun 2 12:13:45.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-dhq66 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 2 12:13:45.592: INFO: stderr: "" Jun 2 12:13:45.592: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:13:45.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dhq66" for this suite. 
Jun 2 12:13:51.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:13:51.703: INFO: namespace: e2e-tests-kubectl-dhq66, resource: bindings, ignored listing per whitelist Jun 2 12:13:51.770: INFO: namespace e2e-tests-kubectl-dhq66 deletion completed in 6.175223503s • [SLOW TEST:11.339 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:13:51.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-8591f71b-a4ca-11ea-889d-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 2 12:13:51.988: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-85965d22-a4ca-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-t5m97" to be "success or failure" Jun 2 12:13:52.004: INFO: Pod "pod-projected-configmaps-85965d22-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.829492ms Jun 2 12:13:54.009: INFO: Pod "pod-projected-configmaps-85965d22-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020517834s Jun 2 12:13:56.013: INFO: Pod "pod-projected-configmaps-85965d22-a4ca-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0240888s STEP: Saw pod success Jun 2 12:13:56.013: INFO: Pod "pod-projected-configmaps-85965d22-a4ca-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:13:56.015: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-85965d22-a4ca-11ea-889d-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jun 2 12:13:56.030: INFO: Waiting for pod pod-projected-configmaps-85965d22-a4ca-11ea-889d-0242ac110018 to disappear Jun 2 12:13:56.034: INFO: Pod pod-projected-configmaps-85965d22-a4ca-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:13:56.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-t5m97" for this suite. 
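[Editor's sketch] The projected configMap "with mappings" test above uses a projected volume whose configMap source remaps a key to a different path. A minimal Go sketch of that volume; the configMap name, key and target path are placeholders:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test"},
						// The mapping: project key data-1 under a nested path.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}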
Jun 2 12:14:02.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:14:02.169: INFO: namespace: e2e-tests-projected-t5m97, resource: bindings, ignored listing per whitelist Jun 2 12:14:02.182: INFO: namespace e2e-tests-projected-t5m97 deletion completed in 6.144456361s • [SLOW TEST:10.411 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:14:02.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Jun 2 12:14:02.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jun 2 12:14:02.386: INFO: stderr: "" Jun 2 12:14:02.386: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:14:02.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zqbgw" for this suite. 
Jun 2 12:14:08.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:14:08.452: INFO: namespace: e2e-tests-kubectl-zqbgw, resource: bindings, ignored listing per whitelist Jun 2 12:14:08.494: INFO: namespace e2e-tests-kubectl-zqbgw deletion completed in 6.088684889s • [SLOW TEST:6.313 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:14:08.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-8f7914b8-a4ca-11ea-889d-0242ac110018 STEP: Creating a pod to test consume secrets Jun 2 12:14:08.592: INFO: Waiting up to 5m0s for pod "pod-secrets-8f79c62f-a4ca-11ea-889d-0242ac110018" in namespace "e2e-tests-secrets-kl5lq" to be "success or failure" Jun 2 12:14:08.605: INFO: Pod "pod-secrets-8f79c62f-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.48987ms Jun 2 12:14:10.609: INFO: Pod "pod-secrets-8f79c62f-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017575973s Jun 2 12:14:12.613: INFO: Pod "pod-secrets-8f79c62f-a4ca-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021557043s STEP: Saw pod success Jun 2 12:14:12.613: INFO: Pod "pod-secrets-8f79c62f-a4ca-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:14:12.616: INFO: Trying to get logs from node hunter-worker pod pod-secrets-8f79c62f-a4ca-11ea-889d-0242ac110018 container secret-volume-test: STEP: delete the pod Jun 2 12:14:12.670: INFO: Waiting for pod pod-secrets-8f79c62f-a4ca-11ea-889d-0242ac110018 to disappear Jun 2 12:14:12.681: INFO: Pod pod-secrets-8f79c62f-a4ca-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:14:12.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-kl5lq" for this suite. 
Jun 2 12:14:18.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:14:18.737: INFO: namespace: e2e-tests-secrets-kl5lq, resource: bindings, ignored listing per whitelist Jun 2 12:14:18.774: INFO: namespace e2e-tests-secrets-kl5lq deletion completed in 6.089580211s • [SLOW TEST:10.279 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:14:18.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 2 12:14:18.912: INFO: Waiting up to 5m0s for pod "pod-95a10f0d-a4ca-11ea-889d-0242ac110018" in namespace "e2e-tests-emptydir-7slqb" to be "success or failure" Jun 2 12:14:18.915: INFO: Pod "pod-95a10f0d-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.005302ms Jun 2 12:14:20.918: INFO: Pod "pod-95a10f0d-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00667318s Jun 2 12:14:22.923: INFO: Pod "pod-95a10f0d-a4ca-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011126779s STEP: Saw pod success Jun 2 12:14:22.923: INFO: Pod "pod-95a10f0d-a4ca-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:14:22.926: INFO: Trying to get logs from node hunter-worker2 pod pod-95a10f0d-a4ca-11ea-889d-0242ac110018 container test-container: STEP: delete the pod Jun 2 12:14:22.975: INFO: Waiting for pod pod-95a10f0d-a4ca-11ea-889d-0242ac110018 to disappear Jun 2 12:14:23.005: INFO: Pod pod-95a10f0d-a4ca-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:14:23.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-7slqb" for this suite. 
Jun 2 12:14:29.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:14:29.121: INFO: namespace: e2e-tests-emptydir-7slqb, resource: bindings, ignored listing per whitelist Jun 2 12:14:29.136: INFO: namespace e2e-tests-emptydir-7slqb deletion completed in 6.127504537s • [SLOW TEST:10.362 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:14:29.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-kzxg STEP: Creating a pod to test atomic-volume-subpath Jun 2 12:14:29.314: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-kzxg" in namespace "e2e-tests-subpath-v59hk" to be "success or failure" Jun 2 12:14:29.336: INFO: Pod "pod-subpath-test-projected-kzxg": Phase="Pending", Reason="", readiness=false. Elapsed: 22.813295ms Jun 2 12:14:31.467: INFO: Pod "pod-subpath-test-projected-kzxg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153201564s Jun 2 12:14:33.501: INFO: Pod "pod-subpath-test-projected-kzxg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187733278s Jun 2 12:14:35.506: INFO: Pod "pod-subpath-test-projected-kzxg": Phase="Running", Reason="", readiness=true. Elapsed: 6.192320678s Jun 2 12:14:37.509: INFO: Pod "pod-subpath-test-projected-kzxg": Phase="Running", Reason="", readiness=false. Elapsed: 8.195533943s Jun 2 12:14:39.514: INFO: Pod "pod-subpath-test-projected-kzxg": Phase="Running", Reason="", readiness=false. Elapsed: 10.200211474s Jun 2 12:14:41.518: INFO: Pod "pod-subpath-test-projected-kzxg": Phase="Running", Reason="", readiness=false. Elapsed: 12.204862362s Jun 2 12:14:43.523: INFO: Pod "pod-subpath-test-projected-kzxg": Phase="Running", Reason="", readiness=false. Elapsed: 14.20945027s Jun 2 12:14:45.528: INFO: Pod "pod-subpath-test-projected-kzxg": Phase="Running", Reason="", readiness=false. Elapsed: 16.214210759s Jun 2 12:14:47.532: INFO: Pod "pod-subpath-test-projected-kzxg": Phase="Running", Reason="", readiness=false. Elapsed: 18.217971388s Jun 2 12:14:49.536: INFO: Pod "pod-subpath-test-projected-kzxg": Phase="Running", Reason="", readiness=false. Elapsed: 20.222705437s Jun 2 12:14:51.541: INFO: Pod "pod-subpath-test-projected-kzxg": Phase="Running", Reason="", readiness=false. Elapsed: 22.227784655s Jun 2 12:14:53.546: INFO: Pod "pod-subpath-test-projected-kzxg": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.232226876s Jun 2 12:14:55.550: INFO: Pod "pod-subpath-test-projected-kzxg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.236746065s STEP: Saw pod success Jun 2 12:14:55.550: INFO: Pod "pod-subpath-test-projected-kzxg" satisfied condition "success or failure" Jun 2 12:14:55.553: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-kzxg container test-container-subpath-projected-kzxg: STEP: delete the pod Jun 2 12:14:55.588: INFO: Waiting for pod pod-subpath-test-projected-kzxg to disappear Jun 2 12:14:55.604: INFO: Pod pod-subpath-test-projected-kzxg no longer exists STEP: Deleting pod pod-subpath-test-projected-kzxg Jun 2 12:14:55.604: INFO: Deleting pod "pod-subpath-test-projected-kzxg" in namespace "e2e-tests-subpath-v59hk" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:14:55.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-v59hk" for this suite. Jun 2 12:15:01.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:15:01.687: INFO: namespace: e2e-tests-subpath-v59hk, resource: bindings, ignored listing per whitelist Jun 2 12:15:01.703: INFO: namespace e2e-tests-subpath-v59hk deletion completed in 6.093067263s • [SLOW TEST:32.567 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:15:01.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 2 12:15:01.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-d272n' Jun 2 12:15:01.929: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 2 12:15:01.929: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Jun 2 12:15:05.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-d272n' Jun 2 12:15:06.109: INFO: stderr: "" Jun 2 12:15:06.109: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:15:06.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-d272n" for this suite. Jun 2 12:15:12.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:15:12.192: INFO: namespace: e2e-tests-kubectl-d272n, resource: bindings, ignored listing per whitelist Jun 2 12:15:12.264: INFO: namespace e2e-tests-kubectl-d272n deletion completed in 6.102021337s • [SLOW TEST:10.561 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:15:12.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-tq9q STEP: Creating a pod to test atomic-volume-subpath Jun 2 12:15:12.424: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-tq9q" in namespace "e2e-tests-subpath-mcfgn" to be "success or failure" Jun 2 12:15:12.437: INFO: Pod "pod-subpath-test-secret-tq9q": Phase="Pending", Reason="", readiness=false. Elapsed: 13.342129ms Jun 2 12:15:14.441: INFO: Pod "pod-subpath-test-secret-tq9q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01730976s Jun 2 12:15:16.445: INFO: Pod "pod-subpath-test-secret-tq9q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021241646s Jun 2 12:15:18.449: INFO: Pod "pod-subpath-test-secret-tq9q": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.025240288s Jun 2 12:15:20.454: INFO: Pod "pod-subpath-test-secret-tq9q": Phase="Running", Reason="", readiness=false. Elapsed: 8.029473937s Jun 2 12:15:22.459: INFO: Pod "pod-subpath-test-secret-tq9q": Phase="Running", Reason="", readiness=false. Elapsed: 10.035043342s Jun 2 12:15:24.464: INFO: Pod "pod-subpath-test-secret-tq9q": Phase="Running", Reason="", readiness=false. Elapsed: 12.039768583s Jun 2 12:15:26.468: INFO: Pod "pod-subpath-test-secret-tq9q": Phase="Running", Reason="", readiness=false. Elapsed: 14.044187957s Jun 2 12:15:28.475: INFO: Pod "pod-subpath-test-secret-tq9q": Phase="Running", Reason="", readiness=false. Elapsed: 16.05101346s Jun 2 12:15:30.479: INFO: Pod "pod-subpath-test-secret-tq9q": Phase="Running", Reason="", readiness=false. Elapsed: 18.055360652s Jun 2 12:15:32.484: INFO: Pod "pod-subpath-test-secret-tq9q": Phase="Running", Reason="", readiness=false. Elapsed: 20.05981701s Jun 2 12:15:34.488: INFO: Pod "pod-subpath-test-secret-tq9q": Phase="Running", Reason="", readiness=false. Elapsed: 22.063993772s Jun 2 12:15:36.492: INFO: Pod "pod-subpath-test-secret-tq9q": Phase="Running", Reason="", readiness=false. Elapsed: 24.068289246s Jun 2 12:15:38.497: INFO: Pod "pod-subpath-test-secret-tq9q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.072814865s STEP: Saw pod success Jun 2 12:15:38.497: INFO: Pod "pod-subpath-test-secret-tq9q" satisfied condition "success or failure" Jun 2 12:15:38.500: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-tq9q container test-container-subpath-secret-tq9q: STEP: delete the pod Jun 2 12:15:38.542: INFO: Waiting for pod pod-subpath-test-secret-tq9q to disappear Jun 2 12:15:38.561: INFO: Pod pod-subpath-test-secret-tq9q no longer exists STEP: Deleting pod pod-subpath-test-secret-tq9q Jun 2 12:15:38.561: INFO: Deleting pod "pod-subpath-test-secret-tq9q" in namespace "e2e-tests-subpath-mcfgn" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:15:38.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-mcfgn" for this suite. 
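For reference, the subpath-with-secret case above boils down to mounting a single key of a Secret at a subPath and reading it back from inside the container. A hand-written sketch of that shape follows; the secret name, image, key, and paths are illustrative stand-ins, not the ones the e2e framework generates:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: subpath-demo-secret
stringData:
  payload: "hello from a secret subpath"
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo-pod
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /mnt/payload"]
    volumeMounts:
    - name: secret-vol
      mountPath: /mnt/payload
      subPath: payload        # mount only the 'payload' key, not the whole secret directory
  volumes:
  - name: secret-vol
    secret:
      secretName: subpath-demo-secret
EOF
kubectl logs subpath-demo-pod   # prints the secret value once the pod has Succeeded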
Jun 2 12:15:44.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:15:44.644: INFO: namespace: e2e-tests-subpath-mcfgn, resource: bindings, ignored listing per whitelist Jun 2 12:15:44.659: INFO: namespace e2e-tests-subpath-mcfgn deletion completed in 6.091388623s • [SLOW TEST:32.394 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:15:44.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-c8cfbf60-a4ca-11ea-889d-0242ac110018 STEP: Creating configMap with name cm-test-opt-upd-c8cfc01f-a4ca-11ea-889d-0242ac110018 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-c8cfbf60-a4ca-11ea-889d-0242ac110018 STEP: Updating configmap cm-test-opt-upd-c8cfc01f-a4ca-11ea-889d-0242ac110018 STEP: Creating configMap with name cm-test-opt-create-c8cfc053-a4ca-11ea-889d-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:15:52.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rc44w" for this suite. 
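The optional-ConfigMap behaviour checked above (the pod keeps running while ConfigMaps referenced with optional: true are deleted, updated, and created, and the mounted files eventually follow) can be tried with a hand-rolled equivalent; the ConfigMap and pod names here are made up for illustration:

kubectl create configmap cm-demo-opt --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-optional-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1 2>/dev/null || echo missing; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-demo-opt
      optional: true    # the pod schedules and runs even if the ConfigMap is absent
EOF

# Update (or delete and recreate) the ConfigMap; the kubelet re-syncs the volume,
# which is what the "waiting to observe update in volume" step polls for.
kubectl create configmap cm-demo-opt --from-literal=data-1=value-2 --dry-run -o yaml | kubectl apply -f -
kubectl logs -f configmap-optional-demo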
Jun 2 12:16:14.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:16:14.974: INFO: namespace: e2e-tests-configmap-rc44w, resource: bindings, ignored listing per whitelist Jun 2 12:16:15.015: INFO: namespace e2e-tests-configmap-rc44w deletion completed in 22.099266615s • [SLOW TEST:30.356 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:16:15.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 2 12:16:15.162: INFO: Waiting up to 5m0s for pod "pod-daecb5c1-a4ca-11ea-889d-0242ac110018" in namespace "e2e-tests-emptydir-qklg9" to be "success or failure" Jun 2 12:16:15.167: INFO: Pod "pod-daecb5c1-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.482831ms Jun 2 12:16:17.171: INFO: Pod "pod-daecb5c1-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009053434s Jun 2 12:16:19.252: INFO: Pod "pod-daecb5c1-a4ca-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089554719s STEP: Saw pod success Jun 2 12:16:19.252: INFO: Pod "pod-daecb5c1-a4ca-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:16:19.255: INFO: Trying to get logs from node hunter-worker pod pod-daecb5c1-a4ca-11ea-889d-0242ac110018 container test-container: STEP: delete the pod Jun 2 12:16:19.312: INFO: Waiting for pod pod-daecb5c1-a4ca-11ea-889d-0242ac110018 to disappear Jun 2 12:16:19.337: INFO: Pod pod-daecb5c1-a4ca-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:16:19.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qklg9" for this suite. 
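The (root,0777,default) case above amounts to: run as root, write a file with mode 0777 onto an emptyDir backed by the default medium (node-local disk), and verify the mode. A hand-written approximation, using illustrative names and a busybox image rather than the framework's test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # busybox runs as root by default; create a 0777 file on the emptyDir and print its owner and mode
    command: ["sh", "-c", "touch /cache/file && chmod 0777 /cache/file && stat -c '%U %a' /cache/file"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}      # default medium = node-local disk
EOF
kubectl logs emptydir-0777-demo   # expected output: root 777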
Jun 2 12:16:25.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:16:25.515: INFO: namespace: e2e-tests-emptydir-qklg9, resource: bindings, ignored listing per whitelist Jun 2 12:16:25.524: INFO: namespace e2e-tests-emptydir-qklg9 deletion completed in 6.18340515s • [SLOW TEST:10.508 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:16:25.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-ncxjd/secret-test-e12a427a-a4ca-11ea-889d-0242ac110018 STEP: Creating a pod to test consume secrets Jun 2 12:16:25.735: INFO: Waiting up to 5m0s for pod "pod-configmaps-e138795a-a4ca-11ea-889d-0242ac110018" in namespace "e2e-tests-secrets-ncxjd" to be "success or failure" Jun 2 12:16:25.739: INFO: Pod "pod-configmaps-e138795a-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.259311ms Jun 2 12:16:27.743: INFO: Pod "pod-configmaps-e138795a-a4ca-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007437446s Jun 2 12:16:29.748: INFO: Pod "pod-configmaps-e138795a-a4ca-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012488279s STEP: Saw pod success Jun 2 12:16:29.748: INFO: Pod "pod-configmaps-e138795a-a4ca-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:16:29.751: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-e138795a-a4ca-11ea-889d-0242ac110018 container env-test: STEP: delete the pod Jun 2 12:16:29.778: INFO: Waiting for pod pod-configmaps-e138795a-a4ca-11ea-889d-0242ac110018 to disappear Jun 2 12:16:29.801: INFO: Pod pod-configmaps-e138795a-a4ca-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:16:29.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-ncxjd" for this suite. 
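The secret-via-environment check above is the standard secretKeyRef pattern; a minimal stand-alone version (secret name, key, and environment variable name chosen here purely for illustration) looks like this:

kubectl create secret generic env-demo-secret --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: env-demo-secret
          key: data-1
EOF
kubectl logs secret-env-demo   # prints SECRET_DATA=value-1 once the pod has Succeeded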
Jun 2 12:16:35.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:16:35.874: INFO: namespace: e2e-tests-secrets-ncxjd, resource: bindings, ignored listing per whitelist Jun 2 12:16:35.882: INFO: namespace e2e-tests-secrets-ncxjd deletion completed in 6.076735247s • [SLOW TEST:10.358 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:16:35.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-knq2s [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-knq2s STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-knq2s Jun 2 12:16:35.999: INFO: Found 0 stateful pods, waiting for 1 Jun 2 12:16:46.004: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jun 2 12:16:46.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-knq2s ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 2 12:16:46.277: INFO: stderr: "I0602 12:16:46.137693 3241 log.go:172] (0xc000138630) (0xc000750640) Create stream\nI0602 12:16:46.137759 3241 log.go:172] (0xc000138630) (0xc000750640) Stream added, broadcasting: 1\nI0602 12:16:46.141070 3241 log.go:172] (0xc000138630) Reply frame received for 1\nI0602 12:16:46.141355 3241 log.go:172] (0xc000138630) (0xc00001cd20) Create stream\nI0602 12:16:46.141383 3241 log.go:172] (0xc000138630) (0xc00001cd20) Stream added, broadcasting: 3\nI0602 12:16:46.143316 3241 log.go:172] (0xc000138630) Reply frame received for 3\nI0602 12:16:46.143354 3241 log.go:172] (0xc000138630) (0xc0004dc000) Create stream\nI0602 12:16:46.143365 3241 log.go:172] (0xc000138630) (0xc0004dc000) Stream added, broadcasting: 5\nI0602 12:16:46.144512 3241 log.go:172] (0xc000138630) Reply frame received for 5\nI0602 12:16:46.269915 3241 log.go:172] (0xc000138630) Data frame received for 3\nI0602 
12:16:46.269939 3241 log.go:172] (0xc00001cd20) (3) Data frame handling\nI0602 12:16:46.269947 3241 log.go:172] (0xc00001cd20) (3) Data frame sent\nI0602 12:16:46.269952 3241 log.go:172] (0xc000138630) Data frame received for 3\nI0602 12:16:46.269957 3241 log.go:172] (0xc00001cd20) (3) Data frame handling\nI0602 12:16:46.269993 3241 log.go:172] (0xc000138630) Data frame received for 5\nI0602 12:16:46.270027 3241 log.go:172] (0xc0004dc000) (5) Data frame handling\nI0602 12:16:46.271642 3241 log.go:172] (0xc000138630) Data frame received for 1\nI0602 12:16:46.271656 3241 log.go:172] (0xc000750640) (1) Data frame handling\nI0602 12:16:46.271673 3241 log.go:172] (0xc000750640) (1) Data frame sent\nI0602 12:16:46.271873 3241 log.go:172] (0xc000138630) (0xc000750640) Stream removed, broadcasting: 1\nI0602 12:16:46.271911 3241 log.go:172] (0xc000138630) Go away received\nI0602 12:16:46.272062 3241 log.go:172] (0xc000138630) (0xc000750640) Stream removed, broadcasting: 1\nI0602 12:16:46.272080 3241 log.go:172] (0xc000138630) (0xc00001cd20) Stream removed, broadcasting: 3\nI0602 12:16:46.272089 3241 log.go:172] (0xc000138630) (0xc0004dc000) Stream removed, broadcasting: 5\n" Jun 2 12:16:46.277: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 2 12:16:46.277: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 2 12:16:46.281: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 2 12:16:56.284: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 2 12:16:56.285: INFO: Waiting for statefulset status.replicas updated to 0 Jun 2 12:16:56.329: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999916s Jun 2 12:16:57.335: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.966929737s Jun 2 12:16:58.340: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.961298841s Jun 2 12:16:59.357: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.955238855s Jun 2 12:17:00.420: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.939484385s Jun 2 12:17:01.425: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.876000108s Jun 2 12:17:02.456: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.870825971s Jun 2 12:17:03.468: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.839651014s Jun 2 12:17:04.473: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.82823285s Jun 2 12:17:05.479: INFO: Verifying statefulset ss doesn't scale past 1 for another 823.036138ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-knq2s Jun 2 12:17:06.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-knq2s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 12:17:06.732: INFO: stderr: "I0602 12:17:06.627100 3264 log.go:172] (0xc00078e2c0) (0xc0006bc640) Create stream\nI0602 12:17:06.627174 3264 log.go:172] (0xc00078e2c0) (0xc0006bc640) Stream added, broadcasting: 1\nI0602 12:17:06.630676 3264 log.go:172] (0xc00078e2c0) Reply frame received for 1\nI0602 12:17:06.630723 3264 log.go:172] (0xc00078e2c0) (0xc00062cdc0) Create stream\nI0602 12:17:06.630734 3264 log.go:172] (0xc00078e2c0) (0xc00062cdc0) Stream added, broadcasting: 3\nI0602 12:17:06.631938 3264 
log.go:172] (0xc00078e2c0) Reply frame received for 3\nI0602 12:17:06.631984 3264 log.go:172] (0xc00078e2c0) (0xc0006bc6e0) Create stream\nI0602 12:17:06.632000 3264 log.go:172] (0xc00078e2c0) (0xc0006bc6e0) Stream added, broadcasting: 5\nI0602 12:17:06.633360 3264 log.go:172] (0xc00078e2c0) Reply frame received for 5\nI0602 12:17:06.726696 3264 log.go:172] (0xc00078e2c0) Data frame received for 5\nI0602 12:17:06.726750 3264 log.go:172] (0xc0006bc6e0) (5) Data frame handling\nI0602 12:17:06.726817 3264 log.go:172] (0xc00078e2c0) Data frame received for 3\nI0602 12:17:06.726865 3264 log.go:172] (0xc00062cdc0) (3) Data frame handling\nI0602 12:17:06.726906 3264 log.go:172] (0xc00062cdc0) (3) Data frame sent\nI0602 12:17:06.726931 3264 log.go:172] (0xc00078e2c0) Data frame received for 3\nI0602 12:17:06.726943 3264 log.go:172] (0xc00062cdc0) (3) Data frame handling\nI0602 12:17:06.728272 3264 log.go:172] (0xc00078e2c0) Data frame received for 1\nI0602 12:17:06.728299 3264 log.go:172] (0xc0006bc640) (1) Data frame handling\nI0602 12:17:06.728325 3264 log.go:172] (0xc0006bc640) (1) Data frame sent\nI0602 12:17:06.728350 3264 log.go:172] (0xc00078e2c0) (0xc0006bc640) Stream removed, broadcasting: 1\nI0602 12:17:06.728461 3264 log.go:172] (0xc00078e2c0) Go away received\nI0602 12:17:06.728580 3264 log.go:172] (0xc00078e2c0) (0xc0006bc640) Stream removed, broadcasting: 1\nI0602 12:17:06.728604 3264 log.go:172] (0xc00078e2c0) (0xc00062cdc0) Stream removed, broadcasting: 3\nI0602 12:17:06.728615 3264 log.go:172] (0xc00078e2c0) (0xc0006bc6e0) Stream removed, broadcasting: 5\n" Jun 2 12:17:06.733: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 2 12:17:06.733: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 2 12:17:06.736: INFO: Found 1 stateful pods, waiting for 3 Jun 2 12:17:16.742: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 2 12:17:16.742: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 2 12:17:16.742: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jun 2 12:17:16.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-knq2s ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 2 12:17:16.988: INFO: stderr: "I0602 12:17:16.888695 3286 log.go:172] (0xc00016c840) (0xc000734640) Create stream\nI0602 12:17:16.888747 3286 log.go:172] (0xc00016c840) (0xc000734640) Stream added, broadcasting: 1\nI0602 12:17:16.890894 3286 log.go:172] (0xc00016c840) Reply frame received for 1\nI0602 12:17:16.890937 3286 log.go:172] (0xc00016c840) (0xc0007346e0) Create stream\nI0602 12:17:16.890947 3286 log.go:172] (0xc00016c840) (0xc0007346e0) Stream added, broadcasting: 3\nI0602 12:17:16.891809 3286 log.go:172] (0xc00016c840) Reply frame received for 3\nI0602 12:17:16.891849 3286 log.go:172] (0xc00016c840) (0xc0005e2be0) Create stream\nI0602 12:17:16.891871 3286 log.go:172] (0xc00016c840) (0xc0005e2be0) Stream added, broadcasting: 5\nI0602 12:17:16.892961 3286 log.go:172] (0xc00016c840) Reply frame received for 5\nI0602 12:17:16.982089 3286 log.go:172] (0xc00016c840) Data frame received for 3\nI0602 12:17:16.982141 3286 log.go:172] (0xc0007346e0) (3) Data frame handling\nI0602 
12:17:16.982155 3286 log.go:172] (0xc0007346e0) (3) Data frame sent\nI0602 12:17:16.982165 3286 log.go:172] (0xc00016c840) Data frame received for 3\nI0602 12:17:16.982171 3286 log.go:172] (0xc0007346e0) (3) Data frame handling\nI0602 12:17:16.982206 3286 log.go:172] (0xc00016c840) Data frame received for 5\nI0602 12:17:16.982215 3286 log.go:172] (0xc0005e2be0) (5) Data frame handling\nI0602 12:17:16.983355 3286 log.go:172] (0xc00016c840) Data frame received for 1\nI0602 12:17:16.983386 3286 log.go:172] (0xc000734640) (1) Data frame handling\nI0602 12:17:16.983406 3286 log.go:172] (0xc000734640) (1) Data frame sent\nI0602 12:17:16.983430 3286 log.go:172] (0xc00016c840) (0xc000734640) Stream removed, broadcasting: 1\nI0602 12:17:16.983450 3286 log.go:172] (0xc00016c840) Go away received\nI0602 12:17:16.983758 3286 log.go:172] (0xc00016c840) (0xc000734640) Stream removed, broadcasting: 1\nI0602 12:17:16.983774 3286 log.go:172] (0xc00016c840) (0xc0007346e0) Stream removed, broadcasting: 3\nI0602 12:17:16.983782 3286 log.go:172] (0xc00016c840) (0xc0005e2be0) Stream removed, broadcasting: 5\n" Jun 2 12:17:16.988: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 2 12:17:16.988: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 2 12:17:16.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-knq2s ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 2 12:17:17.295: INFO: stderr: "I0602 12:17:17.151439 3309 log.go:172] (0xc0006b2370) (0xc0007bd4a0) Create stream\nI0602 12:17:17.151493 3309 log.go:172] (0xc0006b2370) (0xc0007bd4a0) Stream added, broadcasting: 1\nI0602 12:17:17.154347 3309 log.go:172] (0xc0006b2370) Reply frame received for 1\nI0602 12:17:17.154386 3309 log.go:172] (0xc0006b2370) (0xc000106000) Create stream\nI0602 12:17:17.154395 3309 log.go:172] (0xc0006b2370) (0xc000106000) Stream added, broadcasting: 3\nI0602 12:17:17.155515 3309 log.go:172] (0xc0006b2370) Reply frame received for 3\nI0602 12:17:17.155581 3309 log.go:172] (0xc0006b2370) (0xc0007bd540) Create stream\nI0602 12:17:17.155611 3309 log.go:172] (0xc0006b2370) (0xc0007bd540) Stream added, broadcasting: 5\nI0602 12:17:17.156701 3309 log.go:172] (0xc0006b2370) Reply frame received for 5\nI0602 12:17:17.285775 3309 log.go:172] (0xc0006b2370) Data frame received for 3\nI0602 12:17:17.285829 3309 log.go:172] (0xc000106000) (3) Data frame handling\nI0602 12:17:17.285852 3309 log.go:172] (0xc000106000) (3) Data frame sent\nI0602 12:17:17.285879 3309 log.go:172] (0xc0006b2370) Data frame received for 3\nI0602 12:17:17.285915 3309 log.go:172] (0xc000106000) (3) Data frame handling\nI0602 12:17:17.286062 3309 log.go:172] (0xc0006b2370) Data frame received for 5\nI0602 12:17:17.286130 3309 log.go:172] (0xc0007bd540) (5) Data frame handling\nI0602 12:17:17.288437 3309 log.go:172] (0xc0006b2370) Data frame received for 1\nI0602 12:17:17.288469 3309 log.go:172] (0xc0007bd4a0) (1) Data frame handling\nI0602 12:17:17.288498 3309 log.go:172] (0xc0007bd4a0) (1) Data frame sent\nI0602 12:17:17.288539 3309 log.go:172] (0xc0006b2370) (0xc0007bd4a0) Stream removed, broadcasting: 1\nI0602 12:17:17.288621 3309 log.go:172] (0xc0006b2370) Go away received\nI0602 12:17:17.288758 3309 log.go:172] (0xc0006b2370) (0xc0007bd4a0) Stream removed, broadcasting: 1\nI0602 12:17:17.288779 3309 log.go:172] (0xc0006b2370) (0xc000106000) Stream removed, 
broadcasting: 3\nI0602 12:17:17.288800 3309 log.go:172] (0xc0006b2370) (0xc0007bd540) Stream removed, broadcasting: 5\n" Jun 2 12:17:17.296: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 2 12:17:17.296: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 2 12:17:17.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-knq2s ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 2 12:17:17.597: INFO: stderr: "I0602 12:17:17.443601 3331 log.go:172] (0xc000138840) (0xc000702640) Create stream\nI0602 12:17:17.443662 3331 log.go:172] (0xc000138840) (0xc000702640) Stream added, broadcasting: 1\nI0602 12:17:17.445789 3331 log.go:172] (0xc000138840) Reply frame received for 1\nI0602 12:17:17.445826 3331 log.go:172] (0xc000138840) (0xc0005c0c80) Create stream\nI0602 12:17:17.445839 3331 log.go:172] (0xc000138840) (0xc0005c0c80) Stream added, broadcasting: 3\nI0602 12:17:17.446538 3331 log.go:172] (0xc000138840) Reply frame received for 3\nI0602 12:17:17.446557 3331 log.go:172] (0xc000138840) (0xc0007026e0) Create stream\nI0602 12:17:17.446564 3331 log.go:172] (0xc000138840) (0xc0007026e0) Stream added, broadcasting: 5\nI0602 12:17:17.447361 3331 log.go:172] (0xc000138840) Reply frame received for 5\nI0602 12:17:17.589782 3331 log.go:172] (0xc000138840) Data frame received for 3\nI0602 12:17:17.589804 3331 log.go:172] (0xc0005c0c80) (3) Data frame handling\nI0602 12:17:17.589814 3331 log.go:172] (0xc0005c0c80) (3) Data frame sent\nI0602 12:17:17.590028 3331 log.go:172] (0xc000138840) Data frame received for 5\nI0602 12:17:17.590056 3331 log.go:172] (0xc0007026e0) (5) Data frame handling\nI0602 12:17:17.590135 3331 log.go:172] (0xc000138840) Data frame received for 3\nI0602 12:17:17.590145 3331 log.go:172] (0xc0005c0c80) (3) Data frame handling\nI0602 12:17:17.591793 3331 log.go:172] (0xc000138840) Data frame received for 1\nI0602 12:17:17.591821 3331 log.go:172] (0xc000702640) (1) Data frame handling\nI0602 12:17:17.591837 3331 log.go:172] (0xc000702640) (1) Data frame sent\nI0602 12:17:17.591876 3331 log.go:172] (0xc000138840) (0xc000702640) Stream removed, broadcasting: 1\nI0602 12:17:17.591907 3331 log.go:172] (0xc000138840) Go away received\nI0602 12:17:17.592138 3331 log.go:172] (0xc000138840) (0xc000702640) Stream removed, broadcasting: 1\nI0602 12:17:17.592177 3331 log.go:172] (0xc000138840) (0xc0005c0c80) Stream removed, broadcasting: 3\nI0602 12:17:17.592203 3331 log.go:172] (0xc000138840) (0xc0007026e0) Stream removed, broadcasting: 5\n" Jun 2 12:17:17.597: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 2 12:17:17.597: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 2 12:17:17.597: INFO: Waiting for statefulset status.replicas updated to 0 Jun 2 12:17:17.601: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 2 12:17:27.608: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 2 12:17:27.608: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 2 12:17:27.608: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 2 12:17:27.618: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999544s Jun 2 
12:17:28.623: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996851211s Jun 2 12:17:29.637: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991394577s Jun 2 12:17:30.642: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.977692869s Jun 2 12:17:31.648: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.972026646s Jun 2 12:17:32.653: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.966695666s Jun 2 12:17:33.658: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.961147643s Jun 2 12:17:34.664: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.956241331s Jun 2 12:17:35.670: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.950162415s Jun 2 12:17:36.676: INFO: Verifying statefulset ss doesn't scale past 3 for another 944.749611ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-knq2s Jun 2 12:17:37.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-knq2s ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 12:17:37.889: INFO: stderr: "I0602 12:17:37.814767 3353 log.go:172] (0xc000138840) (0xc000748640) Create stream\nI0602 12:17:37.814843 3353 log.go:172] (0xc000138840) (0xc000748640) Stream added, broadcasting: 1\nI0602 12:17:37.817280 3353 log.go:172] (0xc000138840) Reply frame received for 1\nI0602 12:17:37.817322 3353 log.go:172] (0xc000138840) (0xc0005f0d20) Create stream\nI0602 12:17:37.817334 3353 log.go:172] (0xc000138840) (0xc0005f0d20) Stream added, broadcasting: 3\nI0602 12:17:37.817982 3353 log.go:172] (0xc000138840) Reply frame received for 3\nI0602 12:17:37.818011 3353 log.go:172] (0xc000138840) (0xc0005f0e60) Create stream\nI0602 12:17:37.818020 3353 log.go:172] (0xc000138840) (0xc0005f0e60) Stream added, broadcasting: 5\nI0602 12:17:37.818728 3353 log.go:172] (0xc000138840) Reply frame received for 5\nI0602 12:17:37.882424 3353 log.go:172] (0xc000138840) Data frame received for 5\nI0602 12:17:37.882451 3353 log.go:172] (0xc0005f0e60) (5) Data frame handling\nI0602 12:17:37.882469 3353 log.go:172] (0xc000138840) Data frame received for 3\nI0602 12:17:37.882474 3353 log.go:172] (0xc0005f0d20) (3) Data frame handling\nI0602 12:17:37.882483 3353 log.go:172] (0xc0005f0d20) (3) Data frame sent\nI0602 12:17:37.882490 3353 log.go:172] (0xc000138840) Data frame received for 3\nI0602 12:17:37.882497 3353 log.go:172] (0xc0005f0d20) (3) Data frame handling\nI0602 12:17:37.884039 3353 log.go:172] (0xc000138840) Data frame received for 1\nI0602 12:17:37.884081 3353 log.go:172] (0xc000748640) (1) Data frame handling\nI0602 12:17:37.884107 3353 log.go:172] (0xc000748640) (1) Data frame sent\nI0602 12:17:37.884143 3353 log.go:172] (0xc000138840) (0xc000748640) Stream removed, broadcasting: 1\nI0602 12:17:37.884179 3353 log.go:172] (0xc000138840) Go away received\nI0602 12:17:37.884377 3353 log.go:172] (0xc000138840) (0xc000748640) Stream removed, broadcasting: 1\nI0602 12:17:37.884409 3353 log.go:172] (0xc000138840) (0xc0005f0d20) Stream removed, broadcasting: 3\nI0602 12:17:37.884431 3353 log.go:172] (0xc000138840) (0xc0005f0e60) Stream removed, broadcasting: 5\n" Jun 2 12:17:37.889: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 2 12:17:37.889: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> 
'/usr/share/nginx/html/index.html' Jun 2 12:17:37.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-knq2s ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 12:17:38.099: INFO: stderr: "I0602 12:17:38.016980 3376 log.go:172] (0xc000830210) (0xc0006bb220) Create stream\nI0602 12:17:38.017041 3376 log.go:172] (0xc000830210) (0xc0006bb220) Stream added, broadcasting: 1\nI0602 12:17:38.019460 3376 log.go:172] (0xc000830210) Reply frame received for 1\nI0602 12:17:38.019497 3376 log.go:172] (0xc000830210) (0xc0006bb2c0) Create stream\nI0602 12:17:38.019505 3376 log.go:172] (0xc000830210) (0xc0006bb2c0) Stream added, broadcasting: 3\nI0602 12:17:38.020613 3376 log.go:172] (0xc000830210) Reply frame received for 3\nI0602 12:17:38.020667 3376 log.go:172] (0xc000830210) (0xc000666000) Create stream\nI0602 12:17:38.020694 3376 log.go:172] (0xc000830210) (0xc000666000) Stream added, broadcasting: 5\nI0602 12:17:38.022277 3376 log.go:172] (0xc000830210) Reply frame received for 5\nI0602 12:17:38.092381 3376 log.go:172] (0xc000830210) Data frame received for 5\nI0602 12:17:38.092421 3376 log.go:172] (0xc000666000) (5) Data frame handling\nI0602 12:17:38.092448 3376 log.go:172] (0xc000830210) Data frame received for 3\nI0602 12:17:38.092459 3376 log.go:172] (0xc0006bb2c0) (3) Data frame handling\nI0602 12:17:38.092469 3376 log.go:172] (0xc0006bb2c0) (3) Data frame sent\nI0602 12:17:38.092478 3376 log.go:172] (0xc000830210) Data frame received for 3\nI0602 12:17:38.092486 3376 log.go:172] (0xc0006bb2c0) (3) Data frame handling\nI0602 12:17:38.094020 3376 log.go:172] (0xc000830210) Data frame received for 1\nI0602 12:17:38.094044 3376 log.go:172] (0xc0006bb220) (1) Data frame handling\nI0602 12:17:38.094056 3376 log.go:172] (0xc0006bb220) (1) Data frame sent\nI0602 12:17:38.094070 3376 log.go:172] (0xc000830210) (0xc0006bb220) Stream removed, broadcasting: 1\nI0602 12:17:38.094145 3376 log.go:172] (0xc000830210) Go away received\nI0602 12:17:38.094215 3376 log.go:172] (0xc000830210) (0xc0006bb220) Stream removed, broadcasting: 1\nI0602 12:17:38.094229 3376 log.go:172] (0xc000830210) (0xc0006bb2c0) Stream removed, broadcasting: 3\nI0602 12:17:38.094236 3376 log.go:172] (0xc000830210) (0xc000666000) Stream removed, broadcasting: 5\n" Jun 2 12:17:38.099: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 2 12:17:38.099: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 2 12:17:38.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-knq2s ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 2 12:17:38.379: INFO: stderr: "I0602 12:17:38.278574 3398 log.go:172] (0xc000138840) (0xc00029f2c0) Create stream\nI0602 12:17:38.278639 3398 log.go:172] (0xc000138840) (0xc00029f2c0) Stream added, broadcasting: 1\nI0602 12:17:38.280565 3398 log.go:172] (0xc000138840) Reply frame received for 1\nI0602 12:17:38.280626 3398 log.go:172] (0xc000138840) (0xc000764000) Create stream\nI0602 12:17:38.280647 3398 log.go:172] (0xc000138840) (0xc000764000) Stream added, broadcasting: 3\nI0602 12:17:38.281502 3398 log.go:172] (0xc000138840) Reply frame received for 3\nI0602 12:17:38.281546 3398 log.go:172] (0xc000138840) (0xc000394000) Create stream\nI0602 12:17:38.281570 3398 log.go:172] (0xc000138840) (0xc000394000) Stream added, broadcasting: 
5\nI0602 12:17:38.283341 3398 log.go:172] (0xc000138840) Reply frame received for 5\nI0602 12:17:38.372121 3398 log.go:172] (0xc000138840) Data frame received for 5\nI0602 12:17:38.372222 3398 log.go:172] (0xc000394000) (5) Data frame handling\nI0602 12:17:38.372250 3398 log.go:172] (0xc000138840) Data frame received for 3\nI0602 12:17:38.372258 3398 log.go:172] (0xc000764000) (3) Data frame handling\nI0602 12:17:38.372267 3398 log.go:172] (0xc000764000) (3) Data frame sent\nI0602 12:17:38.372289 3398 log.go:172] (0xc000138840) Data frame received for 3\nI0602 12:17:38.372300 3398 log.go:172] (0xc000764000) (3) Data frame handling\nI0602 12:17:38.373824 3398 log.go:172] (0xc000138840) Data frame received for 1\nI0602 12:17:38.373852 3398 log.go:172] (0xc00029f2c0) (1) Data frame handling\nI0602 12:17:38.373864 3398 log.go:172] (0xc00029f2c0) (1) Data frame sent\nI0602 12:17:38.373877 3398 log.go:172] (0xc000138840) (0xc00029f2c0) Stream removed, broadcasting: 1\nI0602 12:17:38.373895 3398 log.go:172] (0xc000138840) Go away received\nI0602 12:17:38.374018 3398 log.go:172] (0xc000138840) (0xc00029f2c0) Stream removed, broadcasting: 1\nI0602 12:17:38.374032 3398 log.go:172] (0xc000138840) (0xc000764000) Stream removed, broadcasting: 3\nI0602 12:17:38.374043 3398 log.go:172] (0xc000138840) (0xc000394000) Stream removed, broadcasting: 5\n" Jun 2 12:17:38.379: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 2 12:17:38.379: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 2 12:17:38.379: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 2 12:18:08.418: INFO: Deleting all statefulset in ns e2e-tests-statefulset-knq2s Jun 2 12:18:08.422: INFO: Scaling statefulset ss to 0 Jun 2 12:18:08.430: INFO: Waiting for statefulset status.replicas updated to 0 Jun 2 12:18:08.431: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:18:08.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-knq2s" for this suite. 
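The scaling behaviour exercised above relies on two things the log does not spell out: the default podManagementPolicy (OrderedReady) and a readiness probe on /index.html, which is why moving index.html in and out of /usr/share/nginx/html toggles Ready and halts scale-up and scale-down. A hand-written approximation of that setup follows; the image, probe, and labels are assumptions inferred from the logged behaviour, not the framework's exact manifest:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  clusterIP: None            # headless service backing the StatefulSet
  selector:
    baz: blah
    foo: bar
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  podManagementPolicy: OrderedReady   # default: create/delete pods one at a time, in order
  selector:
    matchLabels:
      baz: blah
      foo: bar
  template:
    metadata:
      labels:
        baz: blah
        foo: bar
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
EOF

# Knock ss-0 out of Ready, then try to scale; with OrderedReady the controller
# will not create ss-1 until ss-0 reports Ready again (the "doesn't scale past 1" lines).
kubectl exec ss-0 -- sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
kubectl scale statefulset ss --replicas=3
kubectl exec ss-0 -- sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'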
Jun 2 12:18:14.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:18:14.545: INFO: namespace: e2e-tests-statefulset-knq2s, resource: bindings, ignored listing per whitelist Jun 2 12:18:14.547: INFO: namespace e2e-tests-statefulset-knq2s deletion completed in 6.096344158s • [SLOW TEST:98.665 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:18:14.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-2228a782-a4cb-11ea-889d-0242ac110018 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-2228a782-a4cb-11ea-889d-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:18:20.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bfjz9" for this suite. 
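The projected-ConfigMap variant above follows the same update-and-watch flow as the plain ConfigMap-volume case earlier, only the data is delivered through a projected volume with a configMap source. A minimal illustrative equivalent (names are made up):

kubectl create configmap projected-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /projected/data-1; sleep 5; done"]
    volumeMounts:
    - name: projected-vol
      mountPath: /projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: projected-demo
EOF

# Change the ConfigMap and watch the projected file follow after the kubelet re-sync.
kubectl create configmap projected-demo --from-literal=data-1=value-2 --dry-run -o yaml | kubectl apply -f -
kubectl logs -f projected-configmap-demo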
Jun 2 12:18:42.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:18:42.781: INFO: namespace: e2e-tests-projected-bfjz9, resource: bindings, ignored listing per whitelist Jun 2 12:18:42.859: INFO: namespace e2e-tests-projected-bfjz9 deletion completed in 22.114270448s • [SLOW TEST:28.312 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:18:42.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 2 12:18:43.108: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-4jjqm,SelfLink:/api/v1/namespaces/e2e-tests-watch-4jjqm/configmaps/e2e-watch-test-resource-version,UID:33092c8a-a4cb-11ea-99e8-0242ac110002,ResourceVersion:13834266,Generation:0,CreationTimestamp:2020-06-02 12:18:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 2 12:18:43.108: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-4jjqm,SelfLink:/api/v1/namespaces/e2e-tests-watch-4jjqm/configmaps/e2e-watch-test-resource-version,UID:33092c8a-a4cb-11ea-99e8-0242ac110002,ResourceVersion:13834267,Generation:0,CreationTimestamp:2020-06-02 12:18:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:18:43.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-4jjqm" for this suite. 
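The watch test above drives the watch API from Go; the same "replay from a known resourceVersion" behaviour can be observed from the command line against the raw API. The object name and namespace below are illustrative, and the flow is only an approximation of the test's (it records the version at creation rather than after the first update):

kubectl create configmap watch-demo --from-literal=mutation=0
RV=$(kubectl get configmap watch-demo -o jsonpath='{.metadata.resourceVersion}')
kubectl patch configmap watch-demo --type=merge -p '{"data":{"mutation":"1"}}'
kubectl patch configmap watch-demo --type=merge -p '{"data":{"mutation":"2"}}'
kubectl delete configmap watch-demo

# Watching from the recorded resourceVersion replays the later events
# (MODIFIED, MODIFIED, DELETED), matching the "Got : ..." assertions above.
# Assumes the ConfigMap was created in the 'default' namespace.
kubectl proxy --port=8001 &
curl -N "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}&fieldSelector=metadata.name%3Dwatch-demo"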
Jun 2 12:18:49.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:18:49.186: INFO: namespace: e2e-tests-watch-4jjqm, resource: bindings, ignored listing per whitelist Jun 2 12:18:49.198: INFO: namespace e2e-tests-watch-4jjqm deletion completed in 6.079950152s • [SLOW TEST:6.339 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:18:49.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 2 12:18:49.289: INFO: Waiting up to 5m0s for pod "downwardapi-volume-36c8fc17-a4cb-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-nqmck" to be "success or failure" Jun 2 12:18:49.310: INFO: Pod "downwardapi-volume-36c8fc17-a4cb-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 21.198551ms Jun 2 12:18:51.315: INFO: Pod "downwardapi-volume-36c8fc17-a4cb-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026442116s Jun 2 12:18:53.320: INFO: Pod "downwardapi-volume-36c8fc17-a4cb-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030929568s STEP: Saw pod success Jun 2 12:18:53.320: INFO: Pod "downwardapi-volume-36c8fc17-a4cb-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:18:53.323: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-36c8fc17-a4cb-11ea-889d-0242ac110018 container client-container: STEP: delete the pod Jun 2 12:18:53.342: INFO: Waiting for pod downwardapi-volume-36c8fc17-a4cb-11ea-889d-0242ac110018 to disappear Jun 2 12:18:53.359: INFO: Pod downwardapi-volume-36c8fc17-a4cb-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:18:53.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nqmck" for this suite. 
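The mode-on-item-file case above sets a per-item file mode inside a projected downwardAPI volume; a minimal hand-written pod doing the same (the names and the 0400 mode are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -L -c '%a %n' /etc/podinfo/podname && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400          # the per-item file mode this test asserts on
EOF
kubectl logs downwardapi-mode-demo   # first line should show 400 for the projected file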
Jun 2 12:18:59.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:18:59.495: INFO: namespace: e2e-tests-projected-nqmck, resource: bindings, ignored listing per whitelist Jun 2 12:18:59.504: INFO: namespace e2e-tests-projected-nqmck deletion completed in 6.141652988s • [SLOW TEST:10.306 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:18:59.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Jun 2 12:18:59.648: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Jun 2 12:18:59.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wqslq' Jun 2 12:18:59.949: INFO: stderr: "" Jun 2 12:18:59.949: INFO: stdout: "service/redis-slave created\n" Jun 2 12:18:59.949: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Jun 2 12:18:59.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wqslq' Jun 2 12:19:00.226: INFO: stderr: "" Jun 2 12:19:00.226: INFO: stdout: "service/redis-master created\n" Jun 2 12:19:00.226: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jun 2 12:19:00.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wqslq' Jun 2 12:19:00.564: INFO: stderr: "" Jun 2 12:19:00.564: INFO: stdout: "service/frontend created\n" Jun 2 12:19:00.564: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Jun 2 12:19:00.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wqslq' Jun 2 12:19:00.796: INFO: stderr: "" Jun 2 12:19:00.797: INFO: stdout: "deployment.extensions/frontend created\n" Jun 2 12:19:00.797: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jun 2 12:19:00.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wqslq' Jun 2 12:19:01.124: INFO: stderr: "" Jun 2 12:19:01.124: INFO: stdout: "deployment.extensions/redis-master created\n" Jun 2 12:19:01.125: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Jun 2 12:19:01.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wqslq' Jun 2 12:19:01.420: INFO: stderr: "" Jun 2 12:19:01.420: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Jun 2 12:19:01.420: INFO: Waiting for all frontend pods to be Running. Jun 2 12:19:11.470: INFO: Waiting for frontend to serve content. Jun 2 12:19:11.554: INFO: Trying to add a new entry to the guestbook. Jun 2 12:19:11.601: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jun 2 12:19:11.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wqslq' Jun 2 12:19:11.829: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 2 12:19:11.829: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jun 2 12:19:11.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wqslq' Jun 2 12:19:12.056: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 2 12:19:12.056: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jun 2 12:19:12.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wqslq' Jun 2 12:19:12.205: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 2 12:19:12.205: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 2 12:19:12.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wqslq' Jun 2 12:19:12.317: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 2 12:19:12.317: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 2 12:19:12.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wqslq' Jun 2 12:19:12.427: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 2 12:19:12.427: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jun 2 12:19:12.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wqslq' Jun 2 12:19:12.932: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 2 12:19:12.932: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:19:12.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wqslq" for this suite. 
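For the cleanup phase above, the six per-manifest force deletions are equivalent to a single kubectl delete naming all of the guestbook objects; shown here against the (ephemeral, already-removed) test namespace purely for illustration:

kubectl delete \
  service/redis-slave service/redis-master service/frontend \
  deployment.extensions/frontend deployment.extensions/redis-master deployment.extensions/redis-slave \
  --grace-period=0 --force --namespace=e2e-tests-kubectl-wqslq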
Jun 2 12:19:53.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:19:53.095: INFO: namespace: e2e-tests-kubectl-wqslq, resource: bindings, ignored listing per whitelist Jun 2 12:19:53.161: INFO: namespace e2e-tests-kubectl-wqslq deletion completed in 40.166626138s • [SLOW TEST:53.657 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:19:53.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-qgfd8 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-qgfd8;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-qgfd8 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-qgfd8;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-qgfd8.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-qgfd8.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-qgfd8.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-qgfd8.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-qgfd8.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-qgfd8.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-qgfd8.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-qgfd8.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-qgfd8.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 121.97.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.97.121_udp@PTR;check="$$(dig +tcp +noall +answer +search 121.97.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.97.121_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-qgfd8 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-qgfd8;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-qgfd8 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-qgfd8.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-qgfd8.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-qgfd8.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-qgfd8.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-qgfd8.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-qgfd8.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-qgfd8.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-qgfd8.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 121.97.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.97.121_udp@PTR;check="$$(dig +tcp +noall +answer +search 121.97.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.97.121_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 2 12:19:59.406: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:19:59.409: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:19:59.459: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:19:59.477: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:19:59.479: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:19:59.481: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-qgfd8 from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:19:59.484: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8 from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:19:59.486: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:19:59.488: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:19:59.491: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:19:59.494: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:19:59.509: INFO: Lookups using e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-qgfd8 
jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8 jessie_udp@dns-test-service.e2e-tests-dns-qgfd8.svc jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc] Jun 2 12:20:04.513: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:04.516: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:04.530: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:04.550: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:04.553: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:04.555: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-qgfd8 from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:04.558: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8 from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:04.561: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:04.565: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:04.568: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:04.570: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:04.588: INFO: Lookups using e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service 
jessie_udp@dns-test-service.e2e-tests-dns-qgfd8 jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8 jessie_udp@dns-test-service.e2e-tests-dns-qgfd8.svc jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc] Jun 2 12:20:09.543: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:09.547: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:09.562: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:09.587: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:09.589: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:09.592: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-qgfd8 from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:09.594: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8 from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:09.597: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:09.599: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:09.602: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:09.605: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:09.622: INFO: Lookups using e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc jessie_udp@dns-test-service 
jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-qgfd8 jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8 jessie_udp@dns-test-service.e2e-tests-dns-qgfd8.svc jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc] Jun 2 12:20:14.514: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:14.518: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:14.535: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:14.558: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:14.562: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:14.565: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-qgfd8 from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:14.568: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8 from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:14.571: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:14.574: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:14.576: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:14.579: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:14.596: INFO: Lookups using e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc 
jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-qgfd8 jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8 jessie_udp@dns-test-service.e2e-tests-dns-qgfd8.svc jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc] Jun 2 12:20:19.515: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:19.518: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:19.564: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:19.586: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:19.590: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:19.593: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-qgfd8 from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:19.596: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8 from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:19.598: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:19.601: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:19.603: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:19.606: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:19.624: INFO: Lookups using e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-qgfd8 jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8 jessie_udp@dns-test-service.e2e-tests-dns-qgfd8.svc jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc] Jun 2 12:20:24.514: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:24.517: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:24.533: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:24.561: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:24.564: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:24.568: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-qgfd8 from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:24.571: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8 from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:24.575: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:24.579: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:24.582: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:24.586: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:24.606: INFO: Lookups using e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-qgfd8 jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8 jessie_udp@dns-test-service.e2e-tests-dns-qgfd8.svc jessie_tcp@dns-test-service.e2e-tests-dns-qgfd8.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc] Jun 2 12:20:29.596: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc from pod e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018: the server could not find the requested resource (get pods dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018) Jun 2 12:20:29.611: INFO: Lookups using e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018 failed for: [jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-qgfd8.svc] Jun 2 12:20:34.594: INFO: DNS probes using e2e-tests-dns-qgfd8/dns-test-5cf7c56c-a4cb-11ea-889d-0242ac110018 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:20:34.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-qgfd8" for this suite. Jun 2 12:20:40.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:20:40.951: INFO: namespace: e2e-tests-dns-qgfd8, resource: bindings, ignored listing per whitelist Jun 2 12:20:41.019: INFO: namespace e2e-tests-dns-qgfd8 deletion completed in 6.201577391s • [SLOW TEST:47.857 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:20:41.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jun 2 12:20:45.141: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-797387e1-a4cb-11ea-889d-0242ac110018,GenerateName:,Namespace:e2e-tests-events-wqknp,SelfLink:/api/v1/namespaces/e2e-tests-events-wqknp/pods/send-events-797387e1-a4cb-11ea-889d-0242ac110018,UID:7974e29a-a4cb-11ea-99e8-0242ac110002,ResourceVersion:13834775,Generation:0,CreationTimestamp:2020-06-02 12:20:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 
115240686,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5rdcf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5rdcf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-5rdcf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ea4bb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ea4bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 12:20:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 12:20:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 12:20:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 12:20:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.33,StartTime:2020-06-02 12:20:41 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-06-02 12:20:43 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://0a02ab5965b602a30659ca81b59dc9b9ac365b433c77c33fe2da9f6ec27e3bd9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jun 2 12:20:47.146: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jun 2 12:20:49.150: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:20:49.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-wqknp" for this suite. 
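The scheduler and kubelet events the test looks for above can also be listed from the CLI; a sketch using the pod name from this run, assuming field selectors on events are supported by this kubectl version and that the namespace still exists:

# List events recorded for the send-events pod.
kubectl --kubeconfig=/root/.kube/config get events \
  --namespace=e2e-tests-events-wqknp \
  --field-selector involvedObject.name=send-events-797387e1-a4cb-11ea-889d-0242ac110018
# Expect a Scheduled event from default-scheduler plus Pulled/Created/Started
# events from the kubelet on hunter-worker.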
Jun 2 12:21:27.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:21:27.251: INFO: namespace: e2e-tests-events-wqknp, resource: bindings, ignored listing per whitelist Jun 2 12:21:27.282: INFO: namespace e2e-tests-events-wqknp deletion completed in 38.096240442s • [SLOW TEST:46.263 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:21:27.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-9507b356-a4cb-11ea-889d-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 2 12:21:27.408: INFO: Waiting up to 5m0s for pod "pod-configmaps-950a2b2f-a4cb-11ea-889d-0242ac110018" in namespace "e2e-tests-configmap-xm8k8" to be "success or failure" Jun 2 12:21:27.412: INFO: Pod "pod-configmaps-950a2b2f-a4cb-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129463ms Jun 2 12:21:29.417: INFO: Pod "pod-configmaps-950a2b2f-a4cb-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009020767s Jun 2 12:21:31.421: INFO: Pod "pod-configmaps-950a2b2f-a4cb-11ea-889d-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.013261644s Jun 2 12:21:33.425: INFO: Pod "pod-configmaps-950a2b2f-a4cb-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017155687s STEP: Saw pod success Jun 2 12:21:33.425: INFO: Pod "pod-configmaps-950a2b2f-a4cb-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:21:33.428: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-950a2b2f-a4cb-11ea-889d-0242ac110018 container configmap-volume-test: STEP: delete the pod Jun 2 12:21:33.450: INFO: Waiting for pod pod-configmaps-950a2b2f-a4cb-11ea-889d-0242ac110018 to disappear Jun 2 12:21:33.454: INFO: Pod pod-configmaps-950a2b2f-a4cb-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:21:33.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-xm8k8" for this suite. 
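A minimal sketch of the shape this ConfigMap test exercises: a ConfigMap mounted through an items mapping, with the pod running as a non-root UID. Names, the key/path pair, and the busybox image are illustrative; the suite generates its own names and uses its own test images.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-configmap
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-nonroot
spec:
  securityContext:
    runAsUser: 1000          # non-root, as the test name requires
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: example-configmap
      items:                 # the "mappings" the test name refers to
      - key: data-1
        path: path/to/data-1
  restartPolicy: Never
EOF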
Jun 2 12:21:39.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:21:39.488: INFO: namespace: e2e-tests-configmap-xm8k8, resource: bindings, ignored listing per whitelist Jun 2 12:21:39.604: INFO: namespace e2e-tests-configmap-xm8k8 deletion completed in 6.146337605s • [SLOW TEST:12.322 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:21:39.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jun 2 12:21:39.737: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 2 12:21:39.757: INFO: Waiting for terminating namespaces to be deleted... Jun 2 12:21:39.759: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Jun 2 12:21:39.766: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 2 12:21:39.766: INFO: Container coredns ready: true, restart count 0 Jun 2 12:21:39.766: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Jun 2 12:21:39.766: INFO: Container kube-proxy ready: true, restart count 0 Jun 2 12:21:39.766: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 2 12:21:39.766: INFO: Container kindnet-cni ready: true, restart count 0 Jun 2 12:21:39.766: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Jun 2 12:21:39.772: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 2 12:21:39.772: INFO: Container kube-proxy ready: true, restart count 0 Jun 2 12:21:39.772: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 2 12:21:39.772: INFO: Container kindnet-cni ready: true, restart count 0 Jun 2 12:21:39.772: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 2 12:21:39.772: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 Jun 2 12:21:39.846: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker Jun 2 12:21:39.846: INFO: Pod 
coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 Jun 2 12:21:39.846: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker Jun 2 12:21:39.846: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 Jun 2 12:21:39.846: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 Jun 2 12:21:39.846: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-9c753439-a4cb-11ea-889d-0242ac110018.1614b97981930fa0], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-sxjhc/filler-pod-9c753439-a4cb-11ea-889d-0242ac110018 to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-9c753439-a4cb-11ea-889d-0242ac110018.1614b979cd5c16d4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-9c753439-a4cb-11ea-889d-0242ac110018.1614b97a3125ff45], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-9c753439-a4cb-11ea-889d-0242ac110018.1614b97a458c05f9], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-9c79caff-a4cb-11ea-889d-0242ac110018.1614b9798342da33], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-sxjhc/filler-pod-9c79caff-a4cb-11ea-889d-0242ac110018 to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-9c79caff-a4cb-11ea-889d-0242ac110018.1614b97a0a58b732], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-9c79caff-a4cb-11ea-889d-0242ac110018.1614b97a49b37f3c], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-9c79caff-a4cb-11ea-889d-0242ac110018.1614b97a58318b5e], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.1614b97a72d1fffe], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:21:45.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-sxjhc" for this suite. 
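The FailedScheduling outcome recorded above can be reproduced with a pod whose CPU request exceeds what any node can still allocate; a sketch with an illustrative, deliberately oversized request (the pause image is the one already present on the nodes in this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: overcommitted-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "100"           # far larger than any node's allocatable CPU
EOF
# kubectl describe pod overcommitted-pod
# should show a FailedScheduling event reporting "Insufficient cpu",
# as in the warning event considered above.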
Jun 2 12:21:53.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:21:53.130: INFO: namespace: e2e-tests-sched-pred-sxjhc, resource: bindings, ignored listing per whitelist Jun 2 12:21:53.193: INFO: namespace e2e-tests-sched-pred-sxjhc deletion completed in 8.096665793s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:13.589 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:21:53.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 2 12:21:53.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-vtjqd' Jun 2 12:21:55.617: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 2 12:21:55.617: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Jun 2 12:21:55.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-vtjqd' Jun 2 12:21:55.785: INFO: stderr: "" Jun 2 12:21:55.786: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:21:55.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-vtjqd" for this suite. 
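The deprecated generator invocation the test runs above, reproduced for reference (this client version still accepts it but prints the deprecation warning shown in the log); the namespace is the test-generated one and only exists during the run:

# Create a Job from an image using the job/v1 generator (deprecated in this release).
kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job \
  --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/nginx:1.14-alpine \
  --namespace=e2e-tests-kubectl-vtjqd
# Clean up afterwards, as the test's AfterEach does:
kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job \
  --namespace=e2e-tests-kubectl-vtjqd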
Jun 2 12:22:17.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:22:17.897: INFO: namespace: e2e-tests-kubectl-vtjqd, resource: bindings, ignored listing per whitelist Jun 2 12:22:17.915: INFO: namespace e2e-tests-kubectl-vtjqd deletion completed in 22.100317975s • [SLOW TEST:24.721 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:22:17.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-b33b7e14-a4cb-11ea-889d-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 2 12:22:18.088: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b33f1999-a4cb-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-knp9m" to be "success or failure" Jun 2 12:22:18.132: INFO: Pod "pod-projected-configmaps-b33f1999-a4cb-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 43.886463ms Jun 2 12:22:20.136: INFO: Pod "pod-projected-configmaps-b33f1999-a4cb-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048105836s Jun 2 12:22:22.141: INFO: Pod "pod-projected-configmaps-b33f1999-a4cb-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053171273s STEP: Saw pod success Jun 2 12:22:22.141: INFO: Pod "pod-projected-configmaps-b33f1999-a4cb-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:22:22.144: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-b33f1999-a4cb-11ea-889d-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jun 2 12:22:22.170: INFO: Waiting for pod pod-projected-configmaps-b33f1999-a4cb-11ea-889d-0242ac110018 to disappear Jun 2 12:22:22.318: INFO: Pod pod-projected-configmaps-b33f1999-a4cb-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:22:22.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-knp9m" for this suite. 
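A sketch of the projected-volume shape this test consumes, with a ConfigMap as the single projection source; all names and the busybox image are illustrative stand-ins for the suite's generated objects:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: example-configmap   # must already exist in the namespace
  restartPolicy: Never
EOF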
Jun 2 12:22:28.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:22:28.417: INFO: namespace: e2e-tests-projected-knp9m, resource: bindings, ignored listing per whitelist Jun 2 12:22:28.425: INFO: namespace e2e-tests-projected-knp9m deletion completed in 6.103178191s • [SLOW TEST:10.510 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:22:28.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Jun 2 12:22:28.553: INFO: Waiting up to 5m0s for pod "pod-b97a7cd4-a4cb-11ea-889d-0242ac110018" in namespace "e2e-tests-emptydir-8llfh" to be "success or failure" Jun 2 12:22:28.563: INFO: Pod "pod-b97a7cd4-a4cb-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.691706ms Jun 2 12:22:30.566: INFO: Pod "pod-b97a7cd4-a4cb-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013295891s Jun 2 12:22:32.570: INFO: Pod "pod-b97a7cd4-a4cb-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017182254s STEP: Saw pod success Jun 2 12:22:32.570: INFO: Pod "pod-b97a7cd4-a4cb-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:22:32.573: INFO: Trying to get logs from node hunter-worker pod pod-b97a7cd4-a4cb-11ea-889d-0242ac110018 container test-container: STEP: delete the pod Jun 2 12:22:32.594: INFO: Waiting for pod pod-b97a7cd4-a4cb-11ea-889d-0242ac110018 to disappear Jun 2 12:22:32.599: INFO: Pod pod-b97a7cd4-a4cb-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:22:32.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-8llfh" for this suite. 
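The tmpfs-backed emptyDir this test mounts corresponds to medium: Memory; a minimal illustrative sketch (image and paths are assumptions, not the suite's own):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /cache && stat -c '%a' /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory        # tmpfs-backed emptyDir, as exercised above
  restartPolicy: Never
EOF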
Jun 2 12:22:38.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:22:38.684: INFO: namespace: e2e-tests-emptydir-8llfh, resource: bindings, ignored listing per whitelist Jun 2 12:22:38.694: INFO: namespace e2e-tests-emptydir-8llfh deletion completed in 6.09153131s • [SLOW TEST:10.268 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:22:38.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-bf9a7c7c-a4cb-11ea-889d-0242ac110018 STEP: Creating a pod to test consume secrets Jun 2 12:22:38.830: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bf9b190d-a4cb-11ea-889d-0242ac110018" in namespace "e2e-tests-projected-twnwk" to be "success or failure" Jun 2 12:22:38.842: INFO: Pod "pod-projected-secrets-bf9b190d-a4cb-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.389646ms Jun 2 12:22:40.869: INFO: Pod "pod-projected-secrets-bf9b190d-a4cb-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038730446s Jun 2 12:22:42.940: INFO: Pod "pod-projected-secrets-bf9b190d-a4cb-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.110384333s STEP: Saw pod success Jun 2 12:22:42.940: INFO: Pod "pod-projected-secrets-bf9b190d-a4cb-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:22:42.944: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-bf9b190d-a4cb-11ea-889d-0242ac110018 container projected-secret-volume-test: STEP: delete the pod Jun 2 12:22:42.973: INFO: Waiting for pod pod-projected-secrets-bf9b190d-a4cb-11ea-889d-0242ac110018 to disappear Jun 2 12:22:43.079: INFO: Pod pod-projected-secrets-bf9b190d-a4cb-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:22:43.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-twnwk" for this suite. 
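The "mappings and Item Mode" in this test correspond to a per-item path and mode on a projected secret source; an illustrative sketch with assumed names and a 0400 mode:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: example-secret
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-item-mode-demo
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: example-secret
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400       # the per-item mode the test verifies on the mounted file
  restartPolicy: Never
EOF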
Jun 2 12:22:49.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:22:49.105: INFO: namespace: e2e-tests-projected-twnwk, resource: bindings, ignored listing per whitelist Jun 2 12:22:49.173: INFO: namespace e2e-tests-projected-twnwk deletion completed in 6.090396797s • [SLOW TEST:10.479 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:22:49.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Jun 2 12:22:53.339: INFO: Pod pod-hostip-c5d94370-a4cb-11ea-889d-0242ac110018 has hostIP: 172.17.0.4 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:22:53.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-9769m" for this suite. 
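The hostIP assertion made above can be checked directly from the pod's status while the pod still exists; a sketch using the pod and namespace names from this run:

# Read the IP of the node the pod landed on.
kubectl --kubeconfig=/root/.kube/config get pod \
  pod-hostip-c5d94370-a4cb-11ea-889d-0242ac110018 \
  --namespace=e2e-tests-pods-9769m \
  -o jsonpath='{.status.hostIP}'
# For this run the logged value was 172.17.0.4.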
Jun 2 12:23:15.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:23:15.396: INFO: namespace: e2e-tests-pods-9769m, resource: bindings, ignored listing per whitelist Jun 2 12:23:15.434: INFO: namespace e2e-tests-pods-9769m deletion completed in 22.091111652s • [SLOW TEST:26.261 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:23:15.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jun 2 12:23:22.720: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:23:23.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-zttgx" for this suite. 
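The adopt-and-release flow above hinges on whether the pod's label matches the ReplicaSet's selector; a sketch, valid only while the test namespace exists, using the pod and label key named in the log (the replacement label value is an assumption):

# While the label matches, the ReplicaSet owns the pod via an ownerReference.
kubectl --kubeconfig=/root/.kube/config get pod pod-adoption-release \
  --namespace=e2e-tests-replicaset-zttgx \
  -o jsonpath='{.metadata.ownerReferences[0].kind}'
# Expected: ReplicaSet

# Change the label so it no longer matches the selector; the controller then
# releases the pod (drops its ownerReference) and creates a replacement.
kubectl --kubeconfig=/root/.kube/config label pod pod-adoption-release \
  --namespace=e2e-tests-replicaset-zttgx name=not-matching --overwrite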
Jun 2 12:23:47.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:23:47.910: INFO: namespace: e2e-tests-replicaset-zttgx, resource: bindings, ignored listing per whitelist Jun 2 12:23:47.926: INFO: namespace e2e-tests-replicaset-zttgx deletion completed in 24.120217396s • [SLOW TEST:32.491 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:23:47.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-e8e1a036-a4cb-11ea-889d-0242ac110018 STEP: Creating secret with name s-test-opt-upd-e8e1a09e-a4cb-11ea-889d-0242ac110018 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-e8e1a036-a4cb-11ea-889d-0242ac110018 STEP: Updating secret s-test-opt-upd-e8e1a09e-a4cb-11ea-889d-0242ac110018 STEP: Creating secret with name s-test-opt-create-e8e1a0b4-a4cb-11ea-889d-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:23:56.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hbjr4" for this suite. 
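Annotation (not part of the log): the "optional updates" behaviour comes from marking each secret source in the projected volume as optional, so the pod can start even if a referenced secret is missing, and the kubelet re-syncs the projected files as secrets are deleted, updated or created later. A minimal projected-volume sketch; the secret names are placeholders shortened from the generated names in the log.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true

	// One projected volume pulling from two secrets; both sources are optional,
	// so the pod starts even if one of them does not exist yet or is deleted later.
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"}, // placeholder
						Optional:             &optional,
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd"}, // placeholder
						Optional:             &optional,
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```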
Jun 2 12:24:20.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:24:20.272: INFO: namespace: e2e-tests-projected-hbjr4, resource: bindings, ignored listing per whitelist Jun 2 12:24:20.280: INFO: namespace e2e-tests-projected-hbjr4 deletion completed in 24.091641949s • [SLOW TEST:32.354 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:24:20.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 2 12:24:24.947: INFO: Successfully updated pod "annotationupdatefc287166-a4cb-11ea-889d-0242ac110018" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:24:26.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cxj9s" for this suite. 
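Annotation (not part of the log): the annotation-update test relies on a projected downward API file: the kubelet writes metadata.annotations into the volume and rewrites the file when the pod's annotations change, which is what the test waits on after updating the pod. A minimal sketch of such a volume; the volume and file names are arbitrary.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A projected downward API volume exposing the pod's annotations as a file.
	// When the annotations are patched, the kubelet updates the file in place.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				}},
			},
		},
	}

	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```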
Jun 2 12:24:48.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:24:49.036: INFO: namespace: e2e-tests-projected-cxj9s, resource: bindings, ignored listing per whitelist Jun 2 12:24:49.066: INFO: namespace e2e-tests-projected-cxj9s deletion completed in 22.095431214s • [SLOW TEST:28.786 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:24:49.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Jun 2 12:24:49.192: INFO: Waiting up to 5m0s for pod "pod-0d4e6a94-a4cc-11ea-889d-0242ac110018" in namespace "e2e-tests-emptydir-kc7kf" to be "success or failure" Jun 2 12:24:49.196: INFO: Pod "pod-0d4e6a94-a4cc-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131617ms Jun 2 12:24:51.200: INFO: Pod "pod-0d4e6a94-a4cc-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00807489s Jun 2 12:24:53.204: INFO: Pod "pod-0d4e6a94-a4cc-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011941203s STEP: Saw pod success Jun 2 12:24:53.204: INFO: Pod "pod-0d4e6a94-a4cc-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:24:53.207: INFO: Trying to get logs from node hunter-worker2 pod pod-0d4e6a94-a4cc-11ea-889d-0242ac110018 container test-container: STEP: delete the pod Jun 2 12:24:53.278: INFO: Waiting for pod pod-0d4e6a94-a4cc-11ea-889d-0242ac110018 to disappear Jun 2 12:24:53.386: INFO: Pod pod-0d4e6a94-a4cc-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:24:53.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kc7kf" for this suite. 
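Annotation (not part of the log): the emptyDir check mounts a volume with an empty EmptyDirVolumeSource (default, disk-backed medium) and has the container report the directory's mode, expecting the default 0777. A sketch under those assumptions, with busybox and a stat command standing in for the e2e mounttest image and its arguments.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod that mounts a default-medium emptyDir and prints its permission bits.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-check"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}, // default medium
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // placeholder for the e2e mounttest image
				Command: []string{"sh", "-c", "stat -c '%a' /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```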
Jun 2 12:24:59.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:24:59.479: INFO: namespace: e2e-tests-emptydir-kc7kf, resource: bindings, ignored listing per whitelist Jun 2 12:24:59.527: INFO: namespace e2e-tests-emptydir-kc7kf deletion completed in 6.137406762s • [SLOW TEST:10.461 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:24:59.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jun 2 12:24:59.622: INFO: namespace e2e-tests-kubectl-mgbbg Jun 2 12:24:59.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mgbbg' Jun 2 12:24:59.997: INFO: stderr: "" Jun 2 12:24:59.997: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jun 2 12:25:01.002: INFO: Selector matched 1 pods for map[app:redis] Jun 2 12:25:01.002: INFO: Found 0 / 1 Jun 2 12:25:02.002: INFO: Selector matched 1 pods for map[app:redis] Jun 2 12:25:02.002: INFO: Found 0 / 1 Jun 2 12:25:03.028: INFO: Selector matched 1 pods for map[app:redis] Jun 2 12:25:03.028: INFO: Found 0 / 1 Jun 2 12:25:04.002: INFO: Selector matched 1 pods for map[app:redis] Jun 2 12:25:04.002: INFO: Found 1 / 1 Jun 2 12:25:04.002: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 2 12:25:04.006: INFO: Selector matched 1 pods for map[app:redis] Jun 2 12:25:04.006: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 2 12:25:04.006: INFO: wait on redis-master startup in e2e-tests-kubectl-mgbbg Jun 2 12:25:04.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6xfs4 redis-master --namespace=e2e-tests-kubectl-mgbbg' Jun 2 12:25:04.119: INFO: stderr: "" Jun 2 12:25:04.120: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 02 Jun 12:25:02.977 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Jun 12:25:02.977 # Server started, Redis version 3.2.12\n1:M 02 Jun 12:25:02.977 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Jun 12:25:02.977 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jun 2 12:25:04.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-mgbbg' Jun 2 12:25:04.296: INFO: stderr: "" Jun 2 12:25:04.296: INFO: stdout: "service/rm2 exposed\n" Jun 2 12:25:04.318: INFO: Service rm2 in namespace e2e-tests-kubectl-mgbbg found. STEP: exposing service Jun 2 12:25:06.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-mgbbg' Jun 2 12:25:06.464: INFO: stderr: "" Jun 2 12:25:06.464: INFO: stdout: "service/rm3 exposed\n" Jun 2 12:25:06.494: INFO: Service rm3 in namespace e2e-tests-kubectl-mgbbg found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:25:08.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mgbbg" for this suite. 
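Annotation (not part of the log): `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` essentially creates a Service whose selector is copied from the controller's pod labels (map[app:redis] in this run). A rough Go sketch of the object that gets submitted; the names, port and selector come from the log, everything else (service type left at the ClusterIP default, protocol defaults) is assumed.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Roughly what `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379`
	// creates: a ClusterIP Service selecting the RC's pods by label.
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "rm2"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "redis"},
			Ports: []corev1.ServicePort{{
				Port:       1234,
				TargetPort: intstr.FromInt(6379),
			}},
		},
	}

	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
```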
Jun 2 12:25:32.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:25:32.627: INFO: namespace: e2e-tests-kubectl-mgbbg, resource: bindings, ignored listing per whitelist Jun 2 12:25:32.637: INFO: namespace e2e-tests-kubectl-mgbbg deletion completed in 24.121396952s • [SLOW TEST:33.109 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:25:32.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-llmbh STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-llmbh to expose endpoints map[] Jun 2 12:25:32.893: INFO: Get endpoints failed (14.799918ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jun 2 12:25:33.897: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-llmbh exposes endpoints map[] (1.018892096s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-llmbh STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-llmbh to expose endpoints map[pod1:[100]] Jun 2 12:25:37.001: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-llmbh exposes endpoints map[pod1:[100]] (3.096067236s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-llmbh STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-llmbh to expose endpoints map[pod2:[101] pod1:[100]] Jun 2 12:25:41.133: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-llmbh exposes endpoints map[pod1:[100] pod2:[101]] (4.128846421s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-llmbh STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-llmbh to expose endpoints map[pod2:[101]] Jun 2 12:25:42.158: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-llmbh exposes endpoints map[pod2:[101]] (1.019488079s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-llmbh STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-llmbh to expose endpoints map[] Jun 2 12:25:43.173: INFO: successfully validated that service 
multi-endpoint-test in namespace e2e-tests-services-llmbh exposes endpoints map[] (1.011310896s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:25:43.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-llmbh" for this suite. Jun 2 12:26:05.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:26:05.371: INFO: namespace: e2e-tests-services-llmbh, resource: bindings, ignored listing per whitelist Jun 2 12:26:05.403: INFO: namespace e2e-tests-services-llmbh deletion completed in 22.095012509s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:32.765 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:26:05.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 2 12:26:05.529: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jun 2 12:26:10.534: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 2 12:26:10.534: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jun 2 12:26:12.539: INFO: Creating deployment "test-rollover-deployment" Jun 2 12:26:12.553: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jun 2 12:26:14.560: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jun 2 12:26:14.567: INFO: Ensure that both replica sets have 1 created replica Jun 2 12:26:14.574: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jun 2 12:26:14.580: INFO: Updating deployment test-rollover-deployment Jun 2 12:26:14.580: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jun 2 12:26:16.593: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jun 2 12:26:16.599: INFO: Make sure deployment "test-rollover-deployment" is complete Jun 2 12:26:16.606: INFO: all replica sets need to contain the pod-template-hash label Jun 2 12:26:16.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697572, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697572, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697574, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697572, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 2 12:26:18.615: INFO: all replica sets need to contain the pod-template-hash label Jun 2 12:26:18.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697572, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697572, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697578, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697572, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 2 12:26:20.615: INFO: all replica sets need to contain the pod-template-hash label Jun 2 12:26:20.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697572, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697572, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697578, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697572, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 2 12:26:22.614: INFO: all replica sets need to contain the pod-template-hash label Jun 2 12:26:22.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697572, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697572, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697578, 
loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697572, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 2 12:26:24.614: INFO: all replica sets need to contain the pod-template-hash label Jun 2 12:26:24.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697572, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697572, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697578, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697572, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 2 12:26:26.615: INFO: all replica sets need to contain the pod-template-hash label Jun 2 12:26:26.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697572, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697572, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697578, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726697572, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 2 12:26:28.614: INFO: Jun 2 12:26:28.614: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 2 12:26:28.622: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-xb8b9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xb8b9/deployments/test-rollover-deployment,UID:3efef7e7-a4cc-11ea-99e8-0242ac110002,ResourceVersion:13835970,Generation:2,CreationTimestamp:2020-06-02 12:26:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-02 12:26:12 +0000 UTC 2020-06-02 12:26:12 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-02 12:26:28 +0000 UTC 2020-06-02 12:26:12 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 2 12:26:28.625: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-xb8b9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xb8b9/replicasets/test-rollover-deployment-5b8479fdb6,UID:4036a007-a4cc-11ea-99e8-0242ac110002,ResourceVersion:13835961,Generation:2,CreationTimestamp:2020-06-02 12:26:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 3efef7e7-a4cc-11ea-99e8-0242ac110002 0xc002639f27 0xc002639f28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 2 12:26:28.625: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 2 12:26:28.626: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-xb8b9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xb8b9/replicasets/test-rollover-controller,UID:3aca48dc-a4cc-11ea-99e8-0242ac110002,ResourceVersion:13835969,Generation:2,CreationTimestamp:2020-06-02 12:26:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 3efef7e7-a4cc-11ea-99e8-0242ac110002 0xc002639d97 0xc002639d98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 2 12:26:28.626: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-xb8b9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xb8b9/replicasets/test-rollover-deployment-58494b7559,UID:3f028dab-a4cc-11ea-99e8-0242ac110002,ResourceVersion:13835922,Generation:2,CreationTimestamp:2020-06-02 12:26:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 3efef7e7-a4cc-11ea-99e8-0242ac110002 0xc002639e57 0xc002639e58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 2 12:26:28.628: INFO: Pod "test-rollover-deployment-5b8479fdb6-pj4qs" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-pj4qs,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-xb8b9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xb8b9/pods/test-rollover-deployment-5b8479fdb6-pj4qs,UID:4043eada-a4cc-11ea-99e8-0242ac110002,ResourceVersion:13835939,Generation:0,CreationTimestamp:2020-06-02 12:26:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 4036a007-a4cc-11ea-99e8-0242ac110002 0xc0025076e7 0xc0025076e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gknct {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gknct,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-gknct true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025078b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025078d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 12:26:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 12:26:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-02 12:26:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-06-02 12:26:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.107,StartTime:2020-06-02 12:26:14 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-02 12:26:17 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://0e8bc97b6247f4ba8283179a9a961ba75d05c7bceb98f28d3a9c29cfdec74451}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:26:28.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-xb8b9" for this suite. Jun 2 12:26:36.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:26:36.674: INFO: namespace: e2e-tests-deployment-xb8b9, resource: bindings, ignored listing per whitelist Jun 2 12:26:36.744: INFO: namespace e2e-tests-deployment-xb8b9 deletion completed in 8.112749176s • [SLOW TEST:31.341 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:26:36.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0602 12:26:38.055373 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 2 12:26:38.055: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:26:38.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-4mkzs" for this suite. Jun 2 12:26:44.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:26:44.131: INFO: namespace: e2e-tests-gc-4mkzs, resource: bindings, ignored listing per whitelist Jun 2 12:26:44.150: INFO: namespace e2e-tests-gc-4mkzs deletion completed in 6.092293868s • [SLOW TEST:7.406 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:26:44.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-51e9373f-a4cc-11ea-889d-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 2 12:26:44.306: INFO: Waiting up to 5m0s for pod "pod-configmaps-51ed325d-a4cc-11ea-889d-0242ac110018" in namespace "e2e-tests-configmap-xm7qc" to be "success or failure" Jun 2 12:26:44.309: INFO: Pod "pod-configmaps-51ed325d-a4cc-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.566957ms Jun 2 12:26:46.314: INFO: Pod "pod-configmaps-51ed325d-a4cc-11ea-889d-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007250951s Jun 2 12:26:48.318: INFO: Pod "pod-configmaps-51ed325d-a4cc-11ea-889d-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011472656s STEP: Saw pod success Jun 2 12:26:48.318: INFO: Pod "pod-configmaps-51ed325d-a4cc-11ea-889d-0242ac110018" satisfied condition "success or failure" Jun 2 12:26:48.321: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-51ed325d-a4cc-11ea-889d-0242ac110018 container configmap-volume-test: STEP: delete the pod Jun 2 12:26:48.357: INFO: Waiting for pod pod-configmaps-51ed325d-a4cc-11ea-889d-0242ac110018 to disappear Jun 2 12:26:48.363: INFO: Pod pod-configmaps-51ed325d-a4cc-11ea-889d-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:26:48.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-xm7qc" for this suite. Jun 2 12:26:54.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:26:54.436: INFO: namespace: e2e-tests-configmap-xm7qc, resource: bindings, ignored listing per whitelist Jun 2 12:26:54.498: INFO: namespace e2e-tests-configmap-xm7qc deletion completed in 6.103240517s • [SLOW TEST:10.347 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:26:54.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 2 12:26:54.639: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/ pods/ (200; 5.945483ms)
Jun 2 12:26:54.642: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.323409ms)
Jun 2 12:26:54.663: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 20.466136ms)
Jun 2 12:26:54.666: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.20838ms)
Jun 2 12:26:54.670: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.652793ms)
Jun 2 12:26:54.673: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.705524ms)
Jun 2 12:26:54.678: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 4.370985ms)
Jun 2 12:26:54.681: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.181891ms)
Jun 2 12:26:54.684: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.887976ms)
Jun 2 12:26:54.687: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.128856ms)
Jun 2 12:26:54.690: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.942093ms)
Jun 2 12:26:54.693: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.066423ms)
Jun 2 12:26:54.696: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.276304ms)
Jun 2 12:26:54.699: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.932472ms)
Jun 2 12:26:54.703: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.559194ms)
Jun 2 12:26:54.707: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.541965ms)
Jun 2 12:26:54.710: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.69245ms)
Jun 2 12:26:54.714: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.44258ms)
Jun 2 12:26:54.717: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.382479ms)
Jun 2 12:26:54.720: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.177843ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:26:54.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-tdqt8" for this suite. Jun 2 12:27:00.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:27:00.832: INFO: namespace: e2e-tests-proxy-tdqt8, resource: bindings, ignored listing per whitelist Jun 2 12:27:00.839: INFO: namespace e2e-tests-proxy-tdqt8 deletion completed in 6.114854182s • [SLOW TEST:6.341 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:27:00.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:27:07.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-9gp52" for this suite. Jun 2 12:27:13.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:27:13.328: INFO: namespace: e2e-tests-namespaces-9gp52, resource: bindings, ignored listing per whitelist Jun 2 12:27:13.341: INFO: namespace e2e-tests-namespaces-9gp52 deletion completed in 6.121484241s STEP: Destroying namespace "e2e-tests-nsdeletetest-xxkpm" for this suite. Jun 2 12:27:13.343: INFO: Namespace e2e-tests-nsdeletetest-xxkpm was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-n5mxh" for this suite. 
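Annotation (not part of the log): the namespaces test asserts that deleting a namespace garbage-collects every object inside it, services included, and that a recreated namespace of the same name starts empty. A small client-go sketch of that verification; the kubeconfig path and namespace name are placeholders and the context-taking signatures assume a recent client-go.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "nsdeletetest" // placeholder namespace that already contains a Service

	// Delete the namespace and wait until it is gone; the namespace controller
	// removes the contained Service as part of the teardown.
	if err := client.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	for {
		_, err := client.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			break
		}
		time.Sleep(2 * time.Second)
	}

	// Recreate the namespace and confirm no Service survived.
	if _, err := client.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: ns}}, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	svcs, err := client.CoreV1().Services(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("services in recreated namespace:", len(svcs.Items)) // expect 0
}
```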
Jun 2 12:27:19.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:27:19.396: INFO: namespace: e2e-tests-nsdeletetest-n5mxh, resource: bindings, ignored listing per whitelist Jun 2 12:27:19.437: INFO: namespace e2e-tests-nsdeletetest-n5mxh deletion completed in 6.094542708s • [SLOW TEST:18.599 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:27:19.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 2 12:27:19.601: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:27:20.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-7ljg7" for this suite. 
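Annotation (not part of the log): the CustomResourceDefinition test simply creates and deletes a CRD through the apiextensions API group. A minimal sketch of such an object; the group, kind and plural are made up, and the sketch uses the current apiextensions v1 types rather than the v1beta1 API this 1.13-era run would have exercised.

```go
package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A throwaway CRD: creating it registers /apis/example.com/v1/namespaces/*/foos,
	// and deleting it removes the resource again, which is all the test verifies.
	crd := apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"}, // must be <plural>.<group>
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural:   "foos",
				Singular: "foo",
				Kind:     "Foo",
				ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					// Bare-minimum structural schema required by the v1 API.
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}
```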
Jun 2 12:27:26.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 2 12:27:26.693: INFO: namespace: e2e-tests-custom-resource-definition-7ljg7, resource: bindings, ignored listing per whitelist Jun 2 12:27:26.751: INFO: namespace e2e-tests-custom-resource-definition-7ljg7 deletion completed in 6.093649401s • [SLOW TEST:7.314 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 2 12:27:26.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-6b510129-a4cc-11ea-889d-0242ac110018 STEP: Creating secret with name s-test-opt-upd-6b51019b-a4cc-11ea-889d-0242ac110018 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-6b510129-a4cc-11ea-889d-0242ac110018 STEP: Updating secret s-test-opt-upd-6b51019b-a4cc-11ea-889d-0242ac110018 STEP: Creating secret with name s-test-opt-create-6b5101c9-a4cc-11ea-889d-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 2 12:27:35.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-tdqq2" for this suite. 
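Annotation (not part of the log): this is the plain secret-volume counterpart of the earlier projected-secret case; the secrets are mounted directly and marked optional, so deletions, updates and late creations all show up in the mounted files without restarting the pod. A sketch of the volume definitions with placeholder secret names.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true

	// Two optional secret volumes: "del" may be removed and "create" may not exist yet;
	// the kubelet keeps the mounted contents in sync either way.
	vols := []corev1.Volume{
		{
			Name: "secret-del",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "s-test-opt-del", Optional: &optional}, // placeholder
			},
		},
		{
			Name: "secret-create",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "s-test-opt-create", Optional: &optional}, // placeholder
			},
		},
	}

	out, _ := json.MarshalIndent(vols, "", "  ")
	fmt.Println(string(out))
}
```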
Jun 2 12:27:59.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 2 12:27:59.076: INFO: namespace: e2e-tests-secrets-tdqq2, resource: bindings, ignored listing per whitelist
Jun 2 12:27:59.130: INFO: namespace e2e-tests-secrets-tdqq2 deletion completed in 24.091270338s
• [SLOW TEST:32.378 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 2 12:27:59.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-j8wms
Jun 2 12:28:03.303: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-j8wms
STEP: checking the pod's current state and verifying that restartCount is present
Jun 2 12:28:03.305: INFO: Initial restart count of pod liveness-exec is 0
Jun 2 12:28:53.442: INFO: Restart count of pod e2e-tests-container-probe-j8wms/liveness-exec is now 1 (50.136901956s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 2 12:28:53.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-j8wms" for this suite.
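For context, a minimal sketch (not the suite's code) of a pod like the liveness-exec pod above: the exec liveness probe runs "cat /tmp/health", the container removes the file after a while, the probe starts failing, and the kubelet restarts the container, which is the restartCount change the spec waits for. The image and timings are placeholders, and the field names follow recent k8s.io/api releases; in the v1.13-era API this suite runs against, the embedded probe field is named Handler rather than ProbeHandler.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox",
				// Healthy for 30s, then "cat /tmp/health" starts failing.
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	fmt.Println("liveness probe command:", pod.Spec.Containers[0].LivenessProbe.Exec.Command)
}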
Jun 2 12:28:59.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 2 12:28:59.543: INFO: namespace: e2e-tests-container-probe-j8wms, resource: bindings, ignored listing per whitelist
Jun 2 12:28:59.612: INFO: namespace e2e-tests-container-probe-j8wms deletion completed in 6.093938091s
• [SLOW TEST:60.482 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 2 12:28:59.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jun 2 12:28:59.718: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix056671247/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 2 12:28:59.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-q2bbq" for this suite.
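For context, a rough sketch (not the suite's code) of what the --unix-socket spec above checks: start "kubectl proxy" listening on a unix socket, then issue an HTTP request for /api/ through that socket. The socket path and kubeconfig location are placeholders, kubectl is assumed to be on PATH, and the fixed sleep is a crude stand-in for the suite's readiness handling.

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"os/exec"
	"time"
)

func main() {
	sock := "/tmp/kubectl-proxy-example.sock"

	// Roughly the command the test runs asynchronously.
	cmd := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config", "proxy", "--unix-socket="+sock)
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()
	time.Sleep(2 * time.Second) // crude wait for the socket file to appear

	// An http.Client that dials the unix socket instead of TCP; the host in the
	// URL is only a placeholder once the connection goes over the socket.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", sock)
			},
		},
	}
	resp, err := client.Get("http://localhost/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Should print the API versions, matching the "retrieving proxy /api/ output" step.
	fmt.Println(string(body))
}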
Jun 2 12:29:05.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 2 12:29:05.886: INFO: namespace: e2e-tests-kubectl-q2bbq, resource: bindings, ignored listing per whitelist
Jun 2 12:29:05.927: INFO: namespace e2e-tests-kubectl-q2bbq deletion completed in 6.137033363s
• [SLOW TEST:6.316 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 2 12:29:05.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jun 2 12:29:06.080: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-t4nzq" to be "success or failure"
Jun 2 12:29:06.092: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.740595ms
Jun 2 12:29:08.096: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015475732s
Jun 2 12:29:10.100: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020052684s
Jun 2 12:29:12.105: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02481137s
STEP: Saw pod success
Jun 2 12:29:12.105: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jun 2 12:29:12.108: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1:
STEP: delete the pod
Jun 2 12:29:12.134: INFO: Waiting for pod pod-host-path-test to disappear
Jun 2 12:29:12.157: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 2 12:29:12.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-t4nzq" for this suite.
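For context, a minimal sketch (not the suite's code) of the kind of pod the hostPath mode check above creates: a hostPath volume mounted into a short-lived container whose output reports the mode of the mount point. The path, image, and command are placeholders; the real spec runs the mounttest image with its own mode assertions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	hostPathType := corev1.HostPathDirectoryOrCreate
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container-1",
				Image: "busybox",
				// Print the octal mode of the mount point, the value the check inspects.
				Command: []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/tmp/host-path-example",
						Type: &hostPathType,
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name, "mounts hostPath", pod.Spec.Volumes[0].HostPath.Path)
}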
Jun 2 12:29:18.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 2 12:29:18.240: INFO: namespace: e2e-tests-hostpath-t4nzq, resource: bindings, ignored listing per whitelist
Jun 2 12:29:18.284: INFO: namespace e2e-tests-hostpath-t4nzq deletion completed in 6.122841461s
• [SLOW TEST:12.356 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 2 12:29:18.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jun 2 12:29:26.444: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 2 12:29:26.487: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 2 12:29:28.487: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 2 12:29:28.492: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 2 12:29:30.487: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 2 12:29:30.492: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 2 12:29:32.487: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 2 12:29:32.492: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 2 12:29:34.487: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 2 12:29:34.492: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 2 12:29:36.487: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 2 12:29:36.492: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 2 12:29:38.487: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 2 12:29:38.492: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 2 12:29:40.487: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 2 12:29:40.492: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 2 12:29:42.487: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 2 12:29:42.492: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 2 12:29:44.487: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 2 12:29:44.502: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 2 12:29:46.487: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 2 12:29:46.492: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 2 12:29:48.487: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 2 12:29:48.492: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 2 12:29:50.487: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 2 12:29:50.492: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 2 12:29:52.487: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 2 12:29:52.490: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 2 12:29:52.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-c98lz" for this suite.
Jun 2 12:30:14.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 2 12:30:14.601: INFO: namespace: e2e-tests-container-lifecycle-hook-c98lz, resource: bindings, ignored listing per whitelist
Jun 2 12:30:14.624: INFO: namespace e2e-tests-container-lifecycle-hook-c98lz deletion completed in 22.125919869s
• [SLOW TEST:56.340 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 2 12:30:14.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-mnpww
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 2 12:30:14.738: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jun 2 12:30:42.842: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.48 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-mnpww PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 2 12:30:42.842: INFO: >>> kubeConfig: /root/.kube/config
I0602 12:30:42.874014 6 log.go:172] (0xc0029102c0) (0xc001da9b80) Create stream
I0602 12:30:42.874042 6 log.go:172] (0xc0029102c0) (0xc001da9b80) Stream added, broadcasting: 1
I0602 12:30:42.876295 6 log.go:172] (0xc0029102c0) Reply frame received for 1
I0602 12:30:42.876325 6 log.go:172] (0xc0029102c0) (0xc001edcbe0) Create stream
I0602 12:30:42.876337 6 log.go:172] (0xc0029102c0) (0xc001edcbe0) Stream added, broadcasting: 3
I0602 12:30:42.877470 6 log.go:172] (0xc0029102c0) Reply frame received for 3
I0602 12:30:42.877497 6 log.go:172] (0xc0029102c0) (0xc001da9c20) Create stream
I0602 12:30:42.877506 6 log.go:172] (0xc0029102c0) (0xc001da9c20) Stream added, broadcasting: 5
I0602 12:30:42.878458 6 log.go:172] (0xc0029102c0) Reply frame received for 5
I0602 12:30:43.989435 6 log.go:172] (0xc0029102c0) Data frame received for 3
I0602 12:30:43.989542 6 log.go:172] (0xc001edcbe0) (3) Data frame handling
I0602 12:30:43.989582 6 log.go:172] (0xc001edcbe0) (3) Data frame sent
I0602 12:30:43.989618 6 log.go:172] (0xc0029102c0) Data frame received for 5
I0602 12:30:43.989648 6 log.go:172] (0xc001da9c20) (5) Data frame handling
I0602 12:30:43.990157 6 log.go:172] (0xc0029102c0) Data frame received for 3
I0602 12:30:43.990182 6 log.go:172] (0xc001edcbe0) (3) Data frame handling
I0602 12:30:43.992452 6 log.go:172] (0xc0029102c0) Data frame received for 1
I0602 12:30:43.992485 6 log.go:172] (0xc001da9b80) (1) Data frame handling
I0602 12:30:43.992507 6 log.go:172] (0xc001da9b80) (1) Data frame sent
I0602 12:30:43.992547 6 log.go:172] (0xc0029102c0) (0xc001da9b80) Stream removed, broadcasting: 1
I0602 12:30:43.992573 6 log.go:172] (0xc0029102c0) Go away received
I0602 12:30:43.992730 6 log.go:172] (0xc0029102c0) (0xc001da9b80) Stream removed, broadcasting: 1
I0602 12:30:43.992765 6 log.go:172] (0xc0029102c0) (0xc001edcbe0) Stream removed, broadcasting: 3
I0602 12:30:43.992791 6 log.go:172] (0xc0029102c0) (0xc001da9c20) Stream removed, broadcasting: 5
Jun 2 12:30:43.992: INFO: Found all expected endpoints: [netserver-0]
Jun 2 12:30:43.996: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.112 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-mnpww PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 2 12:30:43.996: INFO: >>> kubeConfig: /root/.kube/config
I0602 12:30:44.032341 6 log.go:172] (0xc002a9c2c0) (0xc0027bc460) Create stream
I0602 12:30:44.032363 6 log.go:172] (0xc002a9c2c0) (0xc0027bc460) Stream added, broadcasting: 1
I0602 12:30:44.036333 6 log.go:172] (0xc002a9c2c0) Reply frame received for 1
I0602 12:30:44.036403 6 log.go:172] (0xc002a9c2c0) (0xc001fe7540) Create stream
I0602 12:30:44.036431 6 log.go:172] (0xc002a9c2c0) (0xc001fe7540) Stream added, broadcasting: 3
I0602 12:30:44.037734 6 log.go:172] (0xc002a9c2c0) Reply frame received for 3
I0602 12:30:44.037769 6 log.go:172] (0xc002a9c2c0) (0xc001da9cc0) Create stream
I0602 12:30:44.037781 6 log.go:172] (0xc002a9c2c0) (0xc001da9cc0) Stream added, broadcasting: 5
I0602 12:30:44.038704 6 log.go:172] (0xc002a9c2c0) Reply frame received for 5
I0602 12:30:45.129720 6 log.go:172] (0xc002a9c2c0) Data frame received for 3
I0602 12:30:45.129771 6 log.go:172] (0xc001fe7540) (3) Data frame handling
I0602 12:30:45.129799 6 log.go:172] (0xc001fe7540) (3) Data frame sent
I0602 12:30:45.129891 6 log.go:172] (0xc002a9c2c0) Data frame received for 5
I0602 12:30:45.129921 6 log.go:172] (0xc001da9cc0) (5) Data frame handling
I0602 12:30:45.129949 6 log.go:172] (0xc002a9c2c0) Data frame received for 3
I0602 12:30:45.129964 6 log.go:172] (0xc001fe7540) (3) Data frame handling
I0602 12:30:45.132309 6 log.go:172] (0xc002a9c2c0) Data frame received for 1
I0602 12:30:45.132392 6 log.go:172] (0xc0027bc460) (1) Data frame handling
I0602 12:30:45.132460 6 log.go:172] (0xc0027bc460) (1) Data frame sent
I0602 12:30:45.132487 6 log.go:172] (0xc002a9c2c0) (0xc0027bc460) Stream removed, broadcasting: 1
I0602 12:30:45.132503 6 log.go:172] (0xc002a9c2c0) Go away received
I0602 12:30:45.132642 6 log.go:172] (0xc002a9c2c0) (0xc0027bc460) Stream removed, broadcasting: 1
I0602 12:30:45.132666 6 log.go:172] (0xc002a9c2c0) (0xc001fe7540) Stream removed, broadcasting: 3
I0602 12:30:45.132678 6 log.go:172] (0xc002a9c2c0) (0xc001da9cc0) Stream removed, broadcasting: 5
Jun 2 12:30:45.132: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 2 12:30:45.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-mnpww" for this suite.
Jun 2 12:31:09.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 2 12:31:09.166: INFO: namespace: e2e-tests-pod-network-test-mnpww, resource: bindings, ignored listing per whitelist
Jun 2 12:31:09.218: INFO: namespace e2e-tests-pod-network-test-mnpww deletion completed in 24.081041169s
• [SLOW TEST:54.594 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 2 12:31:09.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jun 2 12:31:09.381: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 12:31:09.383: INFO: Number of nodes with available pods: 0
Jun 2 12:31:09.383: INFO: Node hunter-worker is running more than one daemon pod
Jun 2 12:31:10.389: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 12:31:10.393: INFO: Number of nodes with available pods: 0
Jun 2 12:31:10.393: INFO: Node hunter-worker is running more than one daemon pod
Jun 2 12:31:11.419: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 12:31:11.423: INFO: Number of nodes with available pods: 0
Jun 2 12:31:11.423: INFO: Node hunter-worker is running more than one daemon pod
Jun 2 12:31:12.465: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 12:31:12.467: INFO: Number of nodes with available pods: 0
Jun 2 12:31:12.467: INFO: Node hunter-worker is running more than one daemon pod
Jun 2 12:31:13.389: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 12:31:13.394: INFO: Number of nodes with available pods: 0
Jun 2 12:31:13.394: INFO: Node hunter-worker is running more than one daemon pod
Jun 2 12:31:14.389: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 12:31:14.393: INFO: Number of nodes with available pods: 2
Jun 2 12:31:14.393: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jun 2 12:31:14.483: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 12:31:14.510: INFO: Number of nodes with available pods: 1
Jun 2 12:31:14.510: INFO: Node hunter-worker2 is running more than one daemon pod
Jun 2 12:31:15.515: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 12:31:15.519: INFO: Number of nodes with available pods: 1
Jun 2 12:31:15.519: INFO: Node hunter-worker2 is running more than one daemon pod
Jun 2 12:31:16.542: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 12:31:16.582: INFO: Number of nodes with available pods: 1
Jun 2 12:31:16.582: INFO: Node hunter-worker2 is running more than one daemon pod
Jun 2 12:31:17.514: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 12:31:17.518: INFO: Number of nodes with available pods: 1
Jun 2 12:31:17.518: INFO: Node hunter-worker2 is running more than one daemon pod
Jun 2 12:31:18.515: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 2 12:31:18.518: INFO: Number of nodes with available pods: 2
Jun 2 12:31:18.518: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-8xhqj, will wait for the garbage collector to delete the pods
Jun 2 12:31:18.583: INFO: Deleting DaemonSet.extensions daemon-set took: 5.849496ms
Jun 2 12:31:18.783: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.271434ms
Jun 2 12:31:31.806: INFO: Number of nodes with available pods: 0
Jun 2 12:31:31.806: INFO: Number of running nodes: 0, number of available pods: 0
Jun 2 12:31:31.808: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-8xhqj/daemonsets","resourceVersion":"13836993"},"items":null}
Jun 2 12:31:31.810: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-8xhqj/pods","resourceVersion":"13836993"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 2 12:31:31.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-8xhqj" for this suite.
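For context, a minimal sketch (not the suite's code) of a DaemonSet like the "daemon-set" above, with the addition of a toleration for the node-role.kubernetes.io/master:NoSchedule taint that the log keeps skipping on hunter-control-plane; with that toleration the daemon pod would also be scheduled onto the tainted node. The image, labels, and command are placeholders.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"app": "daemon-set-example"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Without this toleration the controller skips the tainted
					// control-plane node, as the log entries above show.
					Tolerations: []corev1.Toleration{{
						Key:      "node-role.kubernetes.io/master",
						Operator: corev1.TolerationOpExists,
						Effect:   corev1.TaintEffectNoSchedule,
					}},
					Containers: []corev1.Container{{
						Name:    "app",
						Image:   "busybox",
						Command: []string{"sh", "-c", "sleep 3600"},
					}},
				},
			},
		},
	}
	fmt.Printf("DaemonSet %s tolerates taint %s\n", ds.Name, ds.Spec.Template.Spec.Tolerations[0].Key)
}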
Jun 2 12:31:37.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 2 12:31:37.884: INFO: namespace: e2e-tests-daemonsets-8xhqj, resource: bindings, ignored listing per whitelist
Jun 2 12:31:37.933: INFO: namespace e2e-tests-daemonsets-8xhqj deletion completed in 6.111964985s
• [SLOW TEST:28.715 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
Jun 2 12:31:37.934: INFO: Running AfterSuite actions on all nodes
Jun 2 12:31:37.934: INFO: Running AfterSuite actions on node 1
Jun 2 12:31:37.934: INFO: Skipping dumping logs from cluster
Ran 200 of 2164 Specs in 6291.662 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS