I0604 10:46:53.984600 6 e2e.go:224] Starting e2e run "b3f299bb-a650-11ea-86dc-0242ac110018" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1591267613 - Will randomize all specs
Will run 201 of 2164 specs

Jun 4 10:46:54.187: INFO: >>> kubeConfig: /root/.kube/config
Jun 4 10:46:54.190: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 4 10:46:54.207: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 4 10:46:54.243: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 4 10:46:54.243: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 4 10:46:54.243: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 4 10:46:54.252: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 4 10:46:54.252: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 4 10:46:54.252: INFO: e2e test version: v1.13.12
Jun 4 10:46:54.254: INFO: kube-apiserver version: v1.13.12
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
  should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 4 10:46:54.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Jun 4 10:46:54.343: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jun 4 10:46:54.345: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix887331052/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 4 10:46:54.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wtczx" for this suite.
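
The steps above start kubectl proxy on a Unix domain socket rather than a TCP port and then read /api/ through that socket. A minimal manual reproduction might look like the following sketch; the socket path is an arbitrary example, not the temp path the framework generated:

# start a proxy bound to a Unix socket (path is illustrative)
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
# query the API root through the socket with curl's --unix-socket option
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
# stop the background proxy
kill %1
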
Jun 4 10:47:00.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:47:00.521: INFO: namespace: e2e-tests-kubectl-wtczx, resource: bindings, ignored listing per whitelist Jun 4 10:47:00.547: INFO: namespace e2e-tests-kubectl-wtczx deletion completed in 6.131027596s • [SLOW TEST:6.293 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:47:00.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-b8404202-a650-11ea-86dc-0242ac110018 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-b8404202-a650-11ea-86dc-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:48:23.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wrgnq" for this suite. 
Jun 4 10:48:45.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:48:45.135: INFO: namespace: e2e-tests-projected-wrgnq, resource: bindings, ignored listing per whitelist Jun 4 10:48:45.199: INFO: namespace e2e-tests-projected-wrgnq deletion completed in 22.092049897s • [SLOW TEST:104.652 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:48:45.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jun 4 10:48:45.330: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 4 10:48:45.337: INFO: Waiting for terminating namespaces to be deleted... Jun 4 10:48:45.339: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Jun 4 10:48:45.345: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Jun 4 10:48:45.345: INFO: Container kube-proxy ready: true, restart count 0 Jun 4 10:48:45.345: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 4 10:48:45.345: INFO: Container kindnet-cni ready: true, restart count 0 Jun 4 10:48:45.345: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 4 10:48:45.345: INFO: Container coredns ready: true, restart count 0 Jun 4 10:48:45.345: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Jun 4 10:48:45.350: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 4 10:48:45.350: INFO: Container kindnet-cni ready: true, restart count 0 Jun 4 10:48:45.350: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 4 10:48:45.350: INFO: Container coredns ready: true, restart count 0 Jun 4 10:48:45.350: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 4 10:48:45.350: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16155190b8e00715], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
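
The FailedScheduling event above is the expected result when a pod's nodeSelector matches no node labels, which is exactly what this test provokes. A rough manual equivalent, with a made-up pod name, image, and label key, could be:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-demo        # illustrative name
spec:
  nodeSelector:
    example.com/nonexistent: "42"  # no node carries this label
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1    # any image works; the pod never schedules
EOF
# the pod stays Pending and the scheduler records a FailedScheduling event
kubectl describe pod restricted-pod-demo
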
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:48:46.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-vp9xl" for this suite. Jun 4 10:48:52.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:48:52.453: INFO: namespace: e2e-tests-sched-pred-vp9xl, resource: bindings, ignored listing per whitelist Jun 4 10:48:52.513: INFO: namespace e2e-tests-sched-pred-vp9xl deletion completed in 6.140108648s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.313 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:48:52.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:48:52.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-zh8t2" for this suite. 
Jun 4 10:49:14.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:49:14.730: INFO: namespace: e2e-tests-kubelet-test-zh8t2, resource: bindings, ignored listing per whitelist Jun 4 10:49:14.787: INFO: namespace e2e-tests-kubelet-test-zh8t2 deletion completed in 22.128089039s • [SLOW TEST:22.274 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:49:14.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-083a4561-a651-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 4 10:49:14.963: INFO: Waiting up to 5m0s for pod "pod-configmaps-083d8f34-a651-11ea-86dc-0242ac110018" in namespace "e2e-tests-configmap-5l6kq" to be "success or failure" Jun 4 10:49:14.967: INFO: Pod "pod-configmaps-083d8f34-a651-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.78073ms Jun 4 10:49:17.047: INFO: Pod "pod-configmaps-083d8f34-a651-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083420467s Jun 4 10:49:19.052: INFO: Pod "pod-configmaps-083d8f34-a651-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088303113s STEP: Saw pod success Jun 4 10:49:19.052: INFO: Pod "pod-configmaps-083d8f34-a651-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 10:49:19.055: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-083d8f34-a651-11ea-86dc-0242ac110018 container configmap-volume-test: STEP: delete the pod Jun 4 10:49:19.096: INFO: Waiting for pod pod-configmaps-083d8f34-a651-11ea-86dc-0242ac110018 to disappear Jun 4 10:49:19.120: INFO: Pod pod-configmaps-083d8f34-a651-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:49:19.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-5l6kq" for this suite. 
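
The ConfigMap volume test above creates a ConfigMap, mounts it into a pod, and verifies the pod can read the key back (the "success or failure" check on the pod phase). A stripped-down manual version, with illustrative names, key, and mount path, might be:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/config/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config
EOF
# once the pod has run, its log should contain the literal value
kubectl logs configmap-volume-demo
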
Jun 4 10:49:25.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:49:25.179: INFO: namespace: e2e-tests-configmap-5l6kq, resource: bindings, ignored listing per whitelist Jun 4 10:49:25.232: INFO: namespace e2e-tests-configmap-5l6kq deletion completed in 6.108301271s • [SLOW TEST:10.445 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:49:25.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-t6ddh STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 4 10:49:25.301: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 4 10:49:45.467: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.36:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-t6ddh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 10:49:45.467: INFO: >>> kubeConfig: /root/.kube/config I0604 10:49:45.501329 6 log.go:172] (0xc0006e3e40) (0xc0015a19a0) Create stream I0604 10:49:45.501357 6 log.go:172] (0xc0006e3e40) (0xc0015a19a0) Stream added, broadcasting: 1 I0604 10:49:45.503765 6 log.go:172] (0xc0006e3e40) Reply frame received for 1 I0604 10:49:45.503806 6 log.go:172] (0xc0006e3e40) (0xc000853d60) Create stream I0604 10:49:45.503821 6 log.go:172] (0xc0006e3e40) (0xc000853d60) Stream added, broadcasting: 3 I0604 10:49:45.505099 6 log.go:172] (0xc0006e3e40) Reply frame received for 3 I0604 10:49:45.505328 6 log.go:172] (0xc0006e3e40) (0xc00131a640) Create stream I0604 10:49:45.505549 6 log.go:172] (0xc0006e3e40) (0xc00131a640) Stream added, broadcasting: 5 I0604 10:49:45.506509 6 log.go:172] (0xc0006e3e40) Reply frame received for 5 I0604 10:49:45.646782 6 log.go:172] (0xc0006e3e40) Data frame received for 3 I0604 10:49:45.646856 6 log.go:172] (0xc000853d60) (3) Data frame handling I0604 10:49:45.646891 6 log.go:172] (0xc000853d60) (3) Data frame sent I0604 10:49:45.646917 6 log.go:172] (0xc0006e3e40) Data frame received for 3 I0604 10:49:45.646941 6 log.go:172] (0xc000853d60) (3) Data frame handling I0604 10:49:45.646975 6 log.go:172] (0xc0006e3e40) Data frame received for 5 I0604 10:49:45.647013 6 log.go:172] (0xc00131a640) (5) Data frame handling I0604 10:49:45.649605 6 log.go:172] (0xc0006e3e40) Data frame received for 1 I0604 
10:49:45.649638 6 log.go:172] (0xc0015a19a0) (1) Data frame handling I0604 10:49:45.649669 6 log.go:172] (0xc0015a19a0) (1) Data frame sent I0604 10:49:45.649785 6 log.go:172] (0xc0006e3e40) (0xc0015a19a0) Stream removed, broadcasting: 1 I0604 10:49:45.649889 6 log.go:172] (0xc0006e3e40) Go away received I0604 10:49:45.650068 6 log.go:172] (0xc0006e3e40) (0xc0015a19a0) Stream removed, broadcasting: 1 I0604 10:49:45.650100 6 log.go:172] (0xc0006e3e40) (0xc000853d60) Stream removed, broadcasting: 3 I0604 10:49:45.650123 6 log.go:172] (0xc0006e3e40) (0xc00131a640) Stream removed, broadcasting: 5 Jun 4 10:49:45.650: INFO: Found all expected endpoints: [netserver-0] Jun 4 10:49:45.653: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.233:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-t6ddh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 10:49:45.653: INFO: >>> kubeConfig: /root/.kube/config I0604 10:49:45.679472 6 log.go:172] (0xc000b3c580) (0xc0015a1cc0) Create stream I0604 10:49:45.679495 6 log.go:172] (0xc000b3c580) (0xc0015a1cc0) Stream added, broadcasting: 1 I0604 10:49:45.681764 6 log.go:172] (0xc000b3c580) Reply frame received for 1 I0604 10:49:45.681795 6 log.go:172] (0xc000b3c580) (0xc0008b70e0) Create stream I0604 10:49:45.681805 6 log.go:172] (0xc000b3c580) (0xc0008b70e0) Stream added, broadcasting: 3 I0604 10:49:45.682681 6 log.go:172] (0xc000b3c580) Reply frame received for 3 I0604 10:49:45.682714 6 log.go:172] (0xc000b3c580) (0xc000853e00) Create stream I0604 10:49:45.682725 6 log.go:172] (0xc000b3c580) (0xc000853e00) Stream added, broadcasting: 5 I0604 10:49:45.683443 6 log.go:172] (0xc000b3c580) Reply frame received for 5 I0604 10:49:45.751134 6 log.go:172] (0xc000b3c580) Data frame received for 3 I0604 10:49:45.751158 6 log.go:172] (0xc0008b70e0) (3) Data frame handling I0604 10:49:45.751172 6 log.go:172] (0xc0008b70e0) (3) Data frame sent I0604 10:49:45.751188 6 log.go:172] (0xc000b3c580) Data frame received for 3 I0604 10:49:45.751292 6 log.go:172] (0xc0008b70e0) (3) Data frame handling I0604 10:49:45.751400 6 log.go:172] (0xc000b3c580) Data frame received for 5 I0604 10:49:45.751459 6 log.go:172] (0xc000853e00) (5) Data frame handling I0604 10:49:45.753075 6 log.go:172] (0xc000b3c580) Data frame received for 1 I0604 10:49:45.753104 6 log.go:172] (0xc0015a1cc0) (1) Data frame handling I0604 10:49:45.753310 6 log.go:172] (0xc0015a1cc0) (1) Data frame sent I0604 10:49:45.753348 6 log.go:172] (0xc000b3c580) (0xc0015a1cc0) Stream removed, broadcasting: 1 I0604 10:49:45.753413 6 log.go:172] (0xc000b3c580) Go away received I0604 10:49:45.753463 6 log.go:172] (0xc000b3c580) (0xc0015a1cc0) Stream removed, broadcasting: 1 I0604 10:49:45.753478 6 log.go:172] (0xc000b3c580) (0xc0008b70e0) Stream removed, broadcasting: 3 I0604 10:49:45.753484 6 log.go:172] (0xc000b3c580) (0xc000853e00) Stream removed, broadcasting: 5 Jun 4 10:49:45.753: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:49:45.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-t6ddh" for this suite. 
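
The connectivity checks above are plain curl commands executed inside the hostexec container over the exec subresource; the Create stream / Data frame lines are just the SPDY transport chatter. Each check reduces to something like the following (namespace, pod, and target IP are the ones from this run and will differ elsewhere):

kubectl -n e2e-tests-pod-network-test-t6ddh exec host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.36:8080/hostName"
# a non-empty hostname in the output means the node-to-pod HTTP path works
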
Jun 4 10:50:09.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:50:09.845: INFO: namespace: e2e-tests-pod-network-test-t6ddh, resource: bindings, ignored listing per whitelist Jun 4 10:50:09.868: INFO: namespace e2e-tests-pod-network-test-t6ddh deletion completed in 24.110660615s • [SLOW TEST:44.636 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:50:09.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 4 10:50:09.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-jjqx2' Jun 4 10:50:12.358: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 4 10:50:12.358: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Jun 4 10:50:16.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-jjqx2' Jun 4 10:50:16.490: INFO: stderr: "" Jun 4 10:50:16.490: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:50:16.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jjqx2" for this suite. 
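
The deprecation warning captured above comes from the --generator=deployment/v1beta1 form of kubectl run that this v1.13-era suite still uses. The logged command and the replacement the warning points to are roughly (deployment name and image copied from the run):

# form used by the test; deprecated and later removed from kubectl
kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1
# current replacement suggested by the warning
kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
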
Jun 4 10:50:22.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:50:22.612: INFO: namespace: e2e-tests-kubectl-jjqx2, resource: bindings, ignored listing per whitelist Jun 4 10:50:22.678: INFO: namespace e2e-tests-kubectl-jjqx2 deletion completed in 6.166352439s • [SLOW TEST:12.810 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:50:22.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Jun 4 10:50:22.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-l5mfb' Jun 4 10:50:23.127: INFO: stderr: "" Jun 4 10:50:23.127: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Jun 4 10:50:24.131: INFO: Selector matched 1 pods for map[app:redis] Jun 4 10:50:24.131: INFO: Found 0 / 1 Jun 4 10:50:25.132: INFO: Selector matched 1 pods for map[app:redis] Jun 4 10:50:25.132: INFO: Found 0 / 1 Jun 4 10:50:26.131: INFO: Selector matched 1 pods for map[app:redis] Jun 4 10:50:26.131: INFO: Found 0 / 1 Jun 4 10:50:27.131: INFO: Selector matched 1 pods for map[app:redis] Jun 4 10:50:27.132: INFO: Found 1 / 1 Jun 4 10:50:27.132: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 4 10:50:27.135: INFO: Selector matched 1 pods for map[app:redis] Jun 4 10:50:27.135: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jun 4 10:50:27.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-xw986 redis-master --namespace=e2e-tests-kubectl-l5mfb' Jun 4 10:50:27.270: INFO: stderr: "" Jun 4 10:50:27.270: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 04 Jun 10:50:25.929 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Jun 10:50:25.931 # Server started, Redis version 3.2.12\n1:M 04 Jun 10:50:25.932 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 04 Jun 10:50:25.932 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jun 4 10:50:27.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-xw986 redis-master --namespace=e2e-tests-kubectl-l5mfb --tail=1' Jun 4 10:50:27.414: INFO: stderr: "" Jun 4 10:50:27.414: INFO: stdout: "1:M 04 Jun 10:50:25.932 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jun 4 10:50:27.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-xw986 redis-master --namespace=e2e-tests-kubectl-l5mfb --limit-bytes=1' Jun 4 10:50:27.516: INFO: stderr: "" Jun 4 10:50:27.516: INFO: stdout: " " STEP: exposing timestamps Jun 4 10:50:27.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-xw986 redis-master --namespace=e2e-tests-kubectl-l5mfb --tail=1 --timestamps' Jun 4 10:50:27.638: INFO: stderr: "" Jun 4 10:50:27.638: INFO: stdout: "2020-06-04T10:50:25.93244235Z 1:M 04 Jun 10:50:25.932 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jun 4 10:50:30.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-xw986 redis-master --namespace=e2e-tests-kubectl-l5mfb --since=1s' Jun 4 10:50:30.259: INFO: stderr: "" Jun 4 10:50:30.259: INFO: stdout: "" Jun 4 10:50:30.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-xw986 redis-master --namespace=e2e-tests-kubectl-l5mfb --since=24h' Jun 4 10:50:30.360: INFO: stderr: "" Jun 4 10:50:30.360: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 04 Jun 10:50:25.929 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Jun 10:50:25.931 # Server started, Redis version 3.2.12\n1:M 04 Jun 10:50:25.932 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. 
This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 04 Jun 10:50:25.932 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Jun 4 10:50:30.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-l5mfb' Jun 4 10:50:30.472: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 4 10:50:30.472: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jun 4 10:50:30.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-l5mfb' Jun 4 10:50:30.569: INFO: stderr: "No resources found.\n" Jun 4 10:50:30.569: INFO: stdout: "" Jun 4 10:50:30.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-l5mfb -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 4 10:50:30.663: INFO: stderr: "" Jun 4 10:50:30.663: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:50:30.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-l5mfb" for this suite. 
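
The filtering steps above cover the main kubectl logs selectors; the test calls the older `kubectl log` alias, but the flags are the same (namespace flag omitted here for brevity). Against the pod from this run they look like:

kubectl logs redis-master-xw986 -c redis-master --tail=1               # last line only
kubectl logs redis-master-xw986 -c redis-master --limit-bytes=1        # first byte only
kubectl logs redis-master-xw986 -c redis-master --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
kubectl logs redis-master-xw986 -c redis-master --since=1s             # empty unless something was logged in the last second
kubectl logs redis-master-xw986 -c redis-master --since=24h            # everything from the last day
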
Jun 4 10:50:52.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:50:52.907: INFO: namespace: e2e-tests-kubectl-l5mfb, resource: bindings, ignored listing per whitelist Jun 4 10:50:52.919: INFO: namespace e2e-tests-kubectl-l5mfb deletion completed in 22.252744206s • [SLOW TEST:30.241 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:50:52.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 4 10:50:53.035: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42b784bd-a651-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-r5vph" to be "success or failure" Jun 4 10:50:53.038: INFO: Pod "downwardapi-volume-42b784bd-a651-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.770095ms Jun 4 10:50:55.042: INFO: Pod "downwardapi-volume-42b784bd-a651-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007735467s Jun 4 10:50:57.047: INFO: Pod "downwardapi-volume-42b784bd-a651-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011883037s STEP: Saw pod success Jun 4 10:50:57.047: INFO: Pod "downwardapi-volume-42b784bd-a651-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 10:50:57.049: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-42b784bd-a651-11ea-86dc-0242ac110018 container client-container: STEP: delete the pod Jun 4 10:50:57.069: INFO: Waiting for pod downwardapi-volume-42b784bd-a651-11ea-86dc-0242ac110018 to disappear Jun 4 10:50:57.073: INFO: Pod downwardapi-volume-42b784bd-a651-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:50:57.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-r5vph" for this suite. 
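
The downward API test above exposes the container's own memory request as a file through a projected volume and reads it back. A small hand-written equivalent, with illustrative pod name, mount path, and request size, could be:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF
# with no divisor set, the file contains the request in bytes (33554432 for 32Mi)
kubectl logs downwardapi-volume-demo
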
Jun 4 10:51:03.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:51:03.164: INFO: namespace: e2e-tests-projected-r5vph, resource: bindings, ignored listing per whitelist Jun 4 10:51:03.170: INFO: namespace e2e-tests-projected-r5vph deletion completed in 6.092823694s • [SLOW TEST:10.250 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:51:03.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Jun 4 10:51:03.289: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:51:03.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kp2gd" for this suite. 
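
Here --port 0 (written -p 0 in the logged command) tells kubectl proxy to bind an ephemeral port, which the caller then has to discover from the proxy's startup message. A hand-run sketch, assuming the usual "Starting to serve on 127.0.0.1:<port>" output line:

kubectl proxy -p 0 --disable-filter=true > /tmp/proxy.out 2>&1 &   # --disable-filter matches the logged invocation
sleep 1
PORT=$(awk -F: '/Starting to serve/ {print $NF}' /tmp/proxy.out)
curl "http://127.0.0.1:${PORT}/api/"
kill %1
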
Jun 4 10:51:09.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:51:09.448: INFO: namespace: e2e-tests-kubectl-kp2gd, resource: bindings, ignored listing per whitelist Jun 4 10:51:09.511: INFO: namespace e2e-tests-kubectl-kp2gd deletion completed in 6.129341778s • [SLOW TEST:6.341 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:51:09.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 4 10:51:09.633: INFO: Creating deployment "test-recreate-deployment" Jun 4 10:51:09.643: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jun 4 10:51:09.669: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Jun 4 10:51:11.676: INFO: Waiting deployment "test-recreate-deployment" to complete Jun 4 10:51:11.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726864669, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726864669, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726864669, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726864669, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 4 10:51:13.725: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jun 4 10:51:13.730: INFO: Updating deployment test-recreate-deployment Jun 4 10:51:13.730: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 4 10:51:14.087: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-vjs5h,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vjs5h/deployments/test-recreate-deployment,UID:4ca04413-a651-11ea-99e8-0242ac110002,ResourceVersion:14162106,Generation:2,CreationTimestamp:2020-06-04 10:51:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-06-04 10:51:13 +0000 UTC 2020-06-04 10:51:13 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-06-04 10:51:14 +0000 UTC 2020-06-04 10:51:09 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jun 4 10:51:14.092: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-vjs5h,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vjs5h/replicasets/test-recreate-deployment-589c4bfd,UID:4f27bbe9-a651-11ea-99e8-0242ac110002,ResourceVersion:14162105,Generation:1,CreationTimestamp:2020-06-04 10:51:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 4ca04413-a651-11ea-99e8-0242ac110002 0xc0018dbf9f 0xc0018dbfd0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 4 10:51:14.092: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 4 10:51:14.092: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-vjs5h,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vjs5h/replicasets/test-recreate-deployment-5bf7f65dc,UID:4ca5852e-a651-11ea-99e8-0242ac110002,ResourceVersion:14162095,Generation:2,CreationTimestamp:2020-06-04 10:51:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 4ca04413-a651-11ea-99e8-0242ac110002 0xc0016fe090 0xc0016fe091}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 4 10:51:14.095: INFO: Pod "test-recreate-deployment-589c4bfd-4j7ch" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-4j7ch,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-vjs5h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vjs5h/pods/test-recreate-deployment-589c4bfd-4j7ch,UID:4f2a58b1-a651-11ea-99e8-0242ac110002,ResourceVersion:14162101,Generation:0,CreationTimestamp:2020-06-04 10:51:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 4f27bbe9-a651-11ea-99e8-0242ac110002 0xc0016fea7f 0xc0016fea90}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-twgn8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-twgn8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-twgn8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016feb00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016feb20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 10:51:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:51:14.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-vjs5h" for this suite. Jun 4 10:51:20.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:51:20.414: INFO: namespace: e2e-tests-deployment-vjs5h, resource: bindings, ignored listing per whitelist Jun 4 10:51:20.460: INFO: namespace e2e-tests-deployment-vjs5h deletion completed in 6.360814503s • [SLOW TEST:10.949 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:51:20.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Jun 4 10:51:20.623: INFO: Waiting up to 5m0s for pod "client-containers-5329ba1e-a651-11ea-86dc-0242ac110018" in namespace "e2e-tests-containers-lhsvb" to be "success or failure" Jun 4 10:51:20.649: INFO: Pod "client-containers-5329ba1e-a651-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 26.346748ms Jun 4 10:51:22.655: INFO: Pod "client-containers-5329ba1e-a651-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.031942498s Jun 4 10:51:24.659: INFO: Pod "client-containers-5329ba1e-a651-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035495014s STEP: Saw pod success Jun 4 10:51:24.659: INFO: Pod "client-containers-5329ba1e-a651-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 10:51:24.661: INFO: Trying to get logs from node hunter-worker pod client-containers-5329ba1e-a651-11ea-86dc-0242ac110018 container test-container: STEP: delete the pod Jun 4 10:51:24.694: INFO: Waiting for pod client-containers-5329ba1e-a651-11ea-86dc-0242ac110018 to disappear Jun 4 10:51:24.714: INFO: Pod client-containers-5329ba1e-a651-11ea-86dc-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:51:24.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-lhsvb" for this suite. Jun 4 10:51:30.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:51:30.794: INFO: namespace: e2e-tests-containers-lhsvb, resource: bindings, ignored listing per whitelist Jun 4 10:51:30.802: INFO: namespace e2e-tests-containers-lhsvb deletion completed in 6.084480238s • [SLOW TEST:10.342 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:51:30.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-594b3af0-a651-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume secrets Jun 4 10:51:30.908: INFO: Waiting up to 5m0s for pod "pod-secrets-594d453f-a651-11ea-86dc-0242ac110018" in namespace "e2e-tests-secrets-krlpj" to be "success or failure" Jun 4 10:51:31.463: INFO: Pod "pod-secrets-594d453f-a651-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 554.631521ms Jun 4 10:51:33.486: INFO: Pod "pod-secrets-594d453f-a651-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.577805253s Jun 4 10:51:35.490: INFO: Pod "pod-secrets-594d453f-a651-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.58221751s STEP: Saw pod success Jun 4 10:51:35.491: INFO: Pod "pod-secrets-594d453f-a651-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 10:51:35.494: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-594d453f-a651-11ea-86dc-0242ac110018 container secret-volume-test: STEP: delete the pod Jun 4 10:51:35.514: INFO: Waiting for pod pod-secrets-594d453f-a651-11ea-86dc-0242ac110018 to disappear Jun 4 10:51:35.518: INFO: Pod pod-secrets-594d453f-a651-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:51:35.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-krlpj" for this suite. Jun 4 10:51:41.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:51:41.598: INFO: namespace: e2e-tests-secrets-krlpj, resource: bindings, ignored listing per whitelist Jun 4 10:51:41.615: INFO: namespace e2e-tests-secrets-krlpj deletion completed in 6.093416003s • [SLOW TEST:10.813 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:51:41.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jun 4 10:51:41.893: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-phgc4,SelfLink:/api/v1/namespaces/e2e-tests-watch-phgc4/configmaps/e2e-watch-test-label-changed,UID:5fc2ac6e-a651-11ea-99e8-0242ac110002,ResourceVersion:14162242,Generation:0,CreationTimestamp:2020-06-04 10:51:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 4 10:51:41.894: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-phgc4,SelfLink:/api/v1/namespaces/e2e-tests-watch-phgc4/configmaps/e2e-watch-test-label-changed,UID:5fc2ac6e-a651-11ea-99e8-0242ac110002,ResourceVersion:14162244,Generation:0,CreationTimestamp:2020-06-04 10:51:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 4 10:51:41.894: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-phgc4,SelfLink:/api/v1/namespaces/e2e-tests-watch-phgc4/configmaps/e2e-watch-test-label-changed,UID:5fc2ac6e-a651-11ea-99e8-0242ac110002,ResourceVersion:14162245,Generation:0,CreationTimestamp:2020-06-04 10:51:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jun 4 10:51:51.932: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-phgc4,SelfLink:/api/v1/namespaces/e2e-tests-watch-phgc4/configmaps/e2e-watch-test-label-changed,UID:5fc2ac6e-a651-11ea-99e8-0242ac110002,ResourceVersion:14162266,Generation:0,CreationTimestamp:2020-06-04 10:51:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 4 10:51:51.932: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-phgc4,SelfLink:/api/v1/namespaces/e2e-tests-watch-phgc4/configmaps/e2e-watch-test-label-changed,UID:5fc2ac6e-a651-11ea-99e8-0242ac110002,ResourceVersion:14162267,Generation:0,CreationTimestamp:2020-06-04 10:51:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jun 4 10:51:51.933: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-phgc4,SelfLink:/api/v1/namespaces/e2e-tests-watch-phgc4/configmaps/e2e-watch-test-label-changed,UID:5fc2ac6e-a651-11ea-99e8-0242ac110002,ResourceVersion:14162268,Generation:0,CreationTimestamp:2020-06-04 10:51:41 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:51:51.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-phgc4" for this suite. Jun 4 10:51:57.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:51:58.040: INFO: namespace: e2e-tests-watch-phgc4, resource: bindings, ignored listing per whitelist Jun 4 10:51:58.061: INFO: namespace e2e-tests-watch-phgc4 deletion completed in 6.123164406s • [SLOW TEST:16.446 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:51:58.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 4 10:52:06.266: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 4 10:52:06.279: INFO: Pod pod-with-poststart-http-hook still exists Jun 4 10:52:08.280: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 4 10:52:08.284: INFO: Pod pod-with-poststart-http-hook still exists Jun 4 10:52:10.280: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 4 10:52:10.301: INFO: Pod pod-with-poststart-http-hook still exists Jun 4 10:52:12.280: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 4 10:52:12.283: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:52:12.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-nrndg" for this suite. 
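For reference alongside the poststart-hook steps above, the "pod with lifecycle hook" can be approximated with a minimal sketch like the following. This assumes k8s.io/api at a release contemporary with this run (~v1.13), where the hook handler type is still named corev1.Handler (newer releases rename it LifecycleHandler); the image, hook path, and handlerIP/handlerPort parameters are illustrative assumptions, only the pod name comes from the log.

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // postStartHookPod sketches a pod whose container fires an HTTP GET
    // postStart hook against a separate handler pod, as in the
    // "poststart http hook" case above. handlerIP/handlerPort are
    // hypothetical values for that handler pod.
    func postStartHookPod(handlerIP string, handlerPort int) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "pod-with-poststart-http-hook",
                    Image: "docker.io/library/nginx:1.14-alpine", // illustrative image
                    Lifecycle: &corev1.Lifecycle{
                        PostStart: &corev1.Handler{
                            HTTPGet: &corev1.HTTPGetAction{
                                Path: "/echo?msg=poststart", // illustrative path
                                Host: handlerIP,
                                Port: intstr.FromInt(handlerPort),
                            },
                        },
                    },
                }},
            },
        }
    }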
Jun 4 10:52:34.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:52:34.466: INFO: namespace: e2e-tests-container-lifecycle-hook-nrndg, resource: bindings, ignored listing per whitelist Jun 4 10:52:34.473: INFO: namespace e2e-tests-container-lifecycle-hook-nrndg deletion completed in 22.166361285s • [SLOW TEST:36.412 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:52:34.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-7f409ac8-a651-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 4 10:52:34.605: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7f43377a-a651-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-5v7td" to be "success or failure" Jun 4 10:52:34.621: INFO: Pod "pod-projected-configmaps-7f43377a-a651-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.463361ms Jun 4 10:52:36.676: INFO: Pod "pod-projected-configmaps-7f43377a-a651-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070705828s Jun 4 10:52:38.681: INFO: Pod "pod-projected-configmaps-7f43377a-a651-11ea-86dc-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.075199324s Jun 4 10:52:40.684: INFO: Pod "pod-projected-configmaps-7f43377a-a651-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.078787423s STEP: Saw pod success Jun 4 10:52:40.684: INFO: Pod "pod-projected-configmaps-7f43377a-a651-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 10:52:40.687: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-7f43377a-a651-11ea-86dc-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jun 4 10:52:40.707: INFO: Waiting for pod pod-projected-configmaps-7f43377a-a651-11ea-86dc-0242ac110018 to disappear Jun 4 10:52:40.712: INFO: Pod pod-projected-configmaps-7f43377a-a651-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:52:40.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5v7td" for this suite. Jun 4 10:52:46.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:52:46.750: INFO: namespace: e2e-tests-projected-5v7td, resource: bindings, ignored listing per whitelist Jun 4 10:52:46.812: INFO: namespace e2e-tests-projected-5v7td deletion completed in 6.096806766s • [SLOW TEST:12.339 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:52:46.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-86a203a6-a651-11ea-86dc-0242ac110018 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:52:51.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-vqwmh" for this suite. 
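The "binary data should be reflected in volume" steps above boil down to a ConfigMap carrying both Data and BinaryData keys plus a pod that mounts it. A minimal sketch follows; the key names, byte contents, image, and mount path are illustrative assumptions, while the ConfigMap name is taken from the log.

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // binaryConfigMapAndPod sketches the objects behind the ConfigMap
    // binary-data case: a ConfigMap with a text key and a binary key,
    // and a pod that mounts it as a volume so both are reflected on disk.
    func binaryConfigMapAndPod(ns string) (*corev1.ConfigMap, *corev1.Pod) {
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-86a203a6-a651-11ea-86dc-0242ac110018", Namespace: ns},
            Data:       map[string]string{"data": "value"},                    // illustrative text key
            BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad, 0xbe, 0xef}}, // illustrative binary key
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-binary", Namespace: ns}, // illustrative name
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "configmap-volume-test",
                    Image:        "docker.io/library/busybox:1.29", // illustrative image
                    Command:      []string{"sh", "-c", "sleep 3600"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
                }},
            },
        }
        return cm, pod
    }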
Jun 4 10:53:14.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:53:14.133: INFO: namespace: e2e-tests-configmap-vqwmh, resource: bindings, ignored listing per whitelist Jun 4 10:53:14.225: INFO: namespace e2e-tests-configmap-vqwmh deletion completed in 22.224533537s • [SLOW TEST:27.412 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:53:14.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:53:20.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-lqdr7" for this suite. Jun 4 10:53:26.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:53:26.851: INFO: namespace: e2e-tests-namespaces-lqdr7, resource: bindings, ignored listing per whitelist Jun 4 10:53:26.870: INFO: namespace e2e-tests-namespaces-lqdr7 deletion completed in 6.115227585s STEP: Destroying namespace "e2e-tests-nsdeletetest-k27mn" for this suite. Jun 4 10:53:26.872: INFO: Namespace e2e-tests-nsdeletetest-k27mn was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-jglb8" for this suite. 
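The "Verifying there is no service in the namespace" step above can be reproduced with a short client-go check: after the namespace is deleted and recreated under the same name, listing services should return nothing. A minimal sketch, assuming the pre-1.18 client-go call style (no context argument) to match the vintage of this run; the kubeconfig path and namespace are caller-supplied.

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // servicesGoneAfterNamespaceDelete lists services in the recreated
    // namespace and fails if any survived the namespace deletion.
    func servicesGoneAfterNamespaceDelete(kubeconfig, ns string) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        svcs, err := client.CoreV1().Services(ns).List(metav1.ListOptions{})
        if err != nil {
            return err
        }
        if n := len(svcs.Items); n != 0 {
            return fmt.Errorf("expected no services in recreated namespace %q, found %d", ns, n)
        }
        return nil
    }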
Jun 4 10:53:32.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:53:32.950: INFO: namespace: e2e-tests-nsdeletetest-jglb8, resource: bindings, ignored listing per whitelist Jun 4 10:53:32.968: INFO: namespace e2e-tests-nsdeletetest-jglb8 deletion completed in 6.096398153s • [SLOW TEST:18.743 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:53:32.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jun 4 10:53:33.132: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 4 10:53:33.140: INFO: Waiting for terminating namespaces to be deleted... Jun 4 10:53:33.142: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Jun 4 10:53:33.147: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Jun 4 10:53:33.147: INFO: Container kube-proxy ready: true, restart count 0 Jun 4 10:53:33.148: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 4 10:53:33.148: INFO: Container kindnet-cni ready: true, restart count 0 Jun 4 10:53:33.148: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 4 10:53:33.148: INFO: Container coredns ready: true, restart count 0 Jun 4 10:53:33.148: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Jun 4 10:53:33.152: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 4 10:53:33.152: INFO: Container kindnet-cni ready: true, restart count 0 Jun 4 10:53:33.152: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 4 10:53:33.152: INFO: Container coredns ready: true, restart count 0 Jun 4 10:53:33.152: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 4 10:53:33.152: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
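The pod relaunched in the next step only schedules because its nodeSelector matches the random label just applied to the chosen node. A minimal sketch of such a pod, assuming labelKey/labelValue are whatever the test applied; the pod name and image here are illustrative, not taken from the log.

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // nodeSelectorPod sketches a pod constrained to the node that carries
    // the freshly applied label, mirroring the NodeSelector predicate test.
    func nodeSelectorPod(ns, labelKey, labelValue string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "with-labels", Namespace: ns}, // illustrative name
            Spec: corev1.PodSpec{
                NodeSelector: map[string]string{labelKey: labelValue},
                Containers: []corev1.Container{{
                    Name:  "with-labels",
                    Image: "docker.io/library/nginx:1.14-alpine", // illustrative image
                }},
            },
        }
    }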
STEP: verifying the node has the label kubernetes.io/e2e-a4988519-a651-11ea-86dc-0242ac110018 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-a4988519-a651-11ea-86dc-0242ac110018 off the node hunter-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-a4988519-a651-11ea-86dc-0242ac110018 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:53:41.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-qcqt8" for this suite. Jun 4 10:53:55.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:53:55.490: INFO: namespace: e2e-tests-sched-pred-qcqt8, resource: bindings, ignored listing per whitelist Jun 4 10:53:55.496: INFO: namespace e2e-tests-sched-pred-qcqt8 deletion completed in 14.132839141s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:22.527 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:53:55.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-v8m2b STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 4 10:53:55.625: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 4 10:54:19.805: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.44:8080/dial?request=hostName&protocol=udp&host=10.244.1.242&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-v8m2b PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 10:54:19.805: INFO: >>> kubeConfig: /root/.kube/config I0604 10:54:19.833990 6 log.go:172] (0xc000b3c4d0) (0xc0016cba40) Create stream I0604 10:54:19.834016 6 log.go:172] (0xc000b3c4d0) (0xc0016cba40) Stream added, broadcasting: 1 I0604 10:54:19.836020 6 log.go:172] (0xc000b3c4d0) Reply frame received for 1 I0604 10:54:19.836060 6 log.go:172] (0xc000b3c4d0) (0xc001e44a00) Create stream I0604 10:54:19.836073 6 log.go:172] (0xc000b3c4d0) (0xc001e44a00) Stream added, broadcasting: 3 I0604 10:54:19.836908 6 log.go:172] (0xc000b3c4d0) Reply frame received for 3 I0604 10:54:19.836943 
6 log.go:172] (0xc000b3c4d0) (0xc0016cbae0) Create stream I0604 10:54:19.836954 6 log.go:172] (0xc000b3c4d0) (0xc0016cbae0) Stream added, broadcasting: 5 I0604 10:54:19.837871 6 log.go:172] (0xc000b3c4d0) Reply frame received for 5 I0604 10:54:19.942376 6 log.go:172] (0xc000b3c4d0) Data frame received for 3 I0604 10:54:19.942403 6 log.go:172] (0xc001e44a00) (3) Data frame handling I0604 10:54:19.942419 6 log.go:172] (0xc001e44a00) (3) Data frame sent I0604 10:54:19.942902 6 log.go:172] (0xc000b3c4d0) Data frame received for 3 I0604 10:54:19.942929 6 log.go:172] (0xc001e44a00) (3) Data frame handling I0604 10:54:19.943154 6 log.go:172] (0xc000b3c4d0) Data frame received for 5 I0604 10:54:19.943174 6 log.go:172] (0xc0016cbae0) (5) Data frame handling I0604 10:54:19.944693 6 log.go:172] (0xc000b3c4d0) Data frame received for 1 I0604 10:54:19.944712 6 log.go:172] (0xc0016cba40) (1) Data frame handling I0604 10:54:19.944723 6 log.go:172] (0xc0016cba40) (1) Data frame sent I0604 10:54:19.944732 6 log.go:172] (0xc000b3c4d0) (0xc0016cba40) Stream removed, broadcasting: 1 I0604 10:54:19.944801 6 log.go:172] (0xc000b3c4d0) Go away received I0604 10:54:19.944842 6 log.go:172] (0xc000b3c4d0) (0xc0016cba40) Stream removed, broadcasting: 1 I0604 10:54:19.944875 6 log.go:172] (0xc000b3c4d0) (0xc001e44a00) Stream removed, broadcasting: 3 I0604 10:54:19.944893 6 log.go:172] (0xc000b3c4d0) (0xc0016cbae0) Stream removed, broadcasting: 5 Jun 4 10:54:19.944: INFO: Waiting for endpoints: map[] Jun 4 10:54:19.961: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.44:8080/dial?request=hostName&protocol=udp&host=10.244.2.43&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-v8m2b PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 10:54:19.961: INFO: >>> kubeConfig: /root/.kube/config I0604 10:54:19.999475 6 log.go:172] (0xc000b642c0) (0xc001e44f00) Create stream I0604 10:54:19.999524 6 log.go:172] (0xc000b642c0) (0xc001e44f00) Stream added, broadcasting: 1 I0604 10:54:20.003081 6 log.go:172] (0xc000b642c0) Reply frame received for 1 I0604 10:54:20.003131 6 log.go:172] (0xc000b642c0) (0xc0016cbc20) Create stream I0604 10:54:20.003304 6 log.go:172] (0xc000b642c0) (0xc0016cbc20) Stream added, broadcasting: 3 I0604 10:54:20.004386 6 log.go:172] (0xc000b642c0) Reply frame received for 3 I0604 10:54:20.004430 6 log.go:172] (0xc000b642c0) (0xc0013eba40) Create stream I0604 10:54:20.004450 6 log.go:172] (0xc000b642c0) (0xc0013eba40) Stream added, broadcasting: 5 I0604 10:54:20.005702 6 log.go:172] (0xc000b642c0) Reply frame received for 5 I0604 10:54:20.079260 6 log.go:172] (0xc000b642c0) Data frame received for 3 I0604 10:54:20.079282 6 log.go:172] (0xc0016cbc20) (3) Data frame handling I0604 10:54:20.079295 6 log.go:172] (0xc0016cbc20) (3) Data frame sent I0604 10:54:20.079682 6 log.go:172] (0xc000b642c0) Data frame received for 5 I0604 10:54:20.079700 6 log.go:172] (0xc0013eba40) (5) Data frame handling I0604 10:54:20.079734 6 log.go:172] (0xc000b642c0) Data frame received for 3 I0604 10:54:20.079745 6 log.go:172] (0xc0016cbc20) (3) Data frame handling I0604 10:54:20.082147 6 log.go:172] (0xc000b642c0) Data frame received for 1 I0604 10:54:20.082175 6 log.go:172] (0xc001e44f00) (1) Data frame handling I0604 10:54:20.082202 6 log.go:172] (0xc001e44f00) (1) Data frame sent I0604 10:54:20.082219 6 log.go:172] (0xc000b642c0) (0xc001e44f00) Stream removed, broadcasting: 1 I0604 10:54:20.082240 6 log.go:172] 
(0xc000b642c0) Go away received I0604 10:54:20.082313 6 log.go:172] (0xc000b642c0) (0xc001e44f00) Stream removed, broadcasting: 1 I0604 10:54:20.082335 6 log.go:172] (0xc000b642c0) (0xc0016cbc20) Stream removed, broadcasting: 3 I0604 10:54:20.082348 6 log.go:172] (0xc000b642c0) (0xc0013eba40) Stream removed, broadcasting: 5 Jun 4 10:54:20.082: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:54:20.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-v8m2b" for this suite. Jun 4 10:54:44.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:54:44.188: INFO: namespace: e2e-tests-pod-network-test-v8m2b, resource: bindings, ignored listing per whitelist Jun 4 10:54:44.193: INFO: namespace e2e-tests-pod-network-test-v8m2b deletion completed in 24.106998233s • [SLOW TEST:48.697 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:54:44.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 4 10:54:44.311: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
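The "simple daemon set" created above pairs a pod template with the RollingUpdate strategy, so the later image update rolls pods node by node. A minimal sketch, assuming appsv1 types; the object name and initial image match the log, while the selector label and container name are illustrative.

    package main

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // rollingUpdateDaemonSet sketches a DaemonSet that places one pod on
    // every schedulable node and replaces pods via RollingUpdate when the
    // template changes (e.g. the image bump checked in the steps below).
    func rollingUpdateDaemonSet(ns string) *appsv1.DaemonSet {
        labels := map[string]string{"daemonset-name": "daemon-set"} // illustrative selector label
        return &appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: ns},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                    Type: appsv1.RollingUpdateDaemonSetStrategyType,
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "app", // illustrative container name
                            Image: "docker.io/library/nginx:1.14-alpine", // initial image, later updated by the test
                        }},
                    },
                },
            },
        }
    }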
Jun 4 10:54:44.319: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:54:44.338: INFO: Number of nodes with available pods: 0 Jun 4 10:54:44.338: INFO: Node hunter-worker is running more than one daemon pod Jun 4 10:54:45.343: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:54:45.347: INFO: Number of nodes with available pods: 0 Jun 4 10:54:45.347: INFO: Node hunter-worker is running more than one daemon pod Jun 4 10:54:46.642: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:54:46.648: INFO: Number of nodes with available pods: 0 Jun 4 10:54:46.648: INFO: Node hunter-worker is running more than one daemon pod Jun 4 10:54:47.376: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:54:47.380: INFO: Number of nodes with available pods: 0 Jun 4 10:54:47.380: INFO: Node hunter-worker is running more than one daemon pod Jun 4 10:54:48.341: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:54:48.344: INFO: Number of nodes with available pods: 0 Jun 4 10:54:48.344: INFO: Node hunter-worker is running more than one daemon pod Jun 4 10:54:49.342: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:54:49.346: INFO: Number of nodes with available pods: 2 Jun 4 10:54:49.346: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jun 4 10:54:49.387: INFO: Wrong image for pod: daemon-set-9h8pg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:54:49.387: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:54:49.406: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:54:50.411: INFO: Wrong image for pod: daemon-set-9h8pg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:54:50.411: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:54:50.416: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:54:51.429: INFO: Wrong image for pod: daemon-set-9h8pg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:54:51.429: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 4 10:54:51.432: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:54:52.409: INFO: Wrong image for pod: daemon-set-9h8pg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:54:52.409: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:54:52.412: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:54:53.411: INFO: Wrong image for pod: daemon-set-9h8pg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:54:53.411: INFO: Pod daemon-set-9h8pg is not available Jun 4 10:54:53.411: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:54:53.415: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:54:54.410: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:54:54.410: INFO: Pod daemon-set-pzgs7 is not available Jun 4 10:54:54.415: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:54:55.892: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:54:55.892: INFO: Pod daemon-set-pzgs7 is not available Jun 4 10:54:55.896: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:54:56.411: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:54:56.411: INFO: Pod daemon-set-pzgs7 is not available Jun 4 10:54:56.416: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:54:57.410: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:54:57.410: INFO: Pod daemon-set-pzgs7 is not available Jun 4 10:54:57.414: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:54:58.411: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:54:58.415: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:54:59.411: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 4 10:54:59.415: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:55:00.411: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:55:00.411: INFO: Pod daemon-set-lzsqz is not available Jun 4 10:55:00.416: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:55:01.411: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:55:01.411: INFO: Pod daemon-set-lzsqz is not available Jun 4 10:55:01.415: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:55:02.411: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:55:02.411: INFO: Pod daemon-set-lzsqz is not available Jun 4 10:55:02.415: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:55:03.414: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:55:03.414: INFO: Pod daemon-set-lzsqz is not available Jun 4 10:55:03.416: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:55:04.411: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:55:04.411: INFO: Pod daemon-set-lzsqz is not available Jun 4 10:55:04.416: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:55:05.410: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:55:05.410: INFO: Pod daemon-set-lzsqz is not available Jun 4 10:55:05.414: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:55:06.411: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:55:06.411: INFO: Pod daemon-set-lzsqz is not available Jun 4 10:55:06.415: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:55:07.410: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 4 10:55:07.410: INFO: Pod daemon-set-lzsqz is not available Jun 4 10:55:07.412: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:55:08.411: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:55:08.411: INFO: Pod daemon-set-lzsqz is not available Jun 4 10:55:08.415: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:55:09.411: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:55:09.411: INFO: Pod daemon-set-lzsqz is not available Jun 4 10:55:09.416: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:55:10.411: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:55:10.411: INFO: Pod daemon-set-lzsqz is not available Jun 4 10:55:10.415: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:55:11.411: INFO: Wrong image for pod: daemon-set-lzsqz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 4 10:55:11.411: INFO: Pod daemon-set-lzsqz is not available Jun 4 10:55:11.415: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:55:12.410: INFO: Pod daemon-set-pbxfd is not available Jun 4 10:55:12.415: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Jun 4 10:55:12.419: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:55:12.422: INFO: Number of nodes with available pods: 1 Jun 4 10:55:12.422: INFO: Node hunter-worker2 is running more than one daemon pod Jun 4 10:55:13.426: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:55:13.430: INFO: Number of nodes with available pods: 1 Jun 4 10:55:13.430: INFO: Node hunter-worker2 is running more than one daemon pod Jun 4 10:55:14.436: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:55:14.447: INFO: Number of nodes with available pods: 1 Jun 4 10:55:14.447: INFO: Node hunter-worker2 is running more than one daemon pod Jun 4 10:55:15.427: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 10:55:15.430: INFO: Number of nodes with available pods: 2 Jun 4 10:55:15.430: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-c8kfp, will wait for the garbage collector to delete the pods Jun 4 10:55:15.510: INFO: Deleting DaemonSet.extensions daemon-set took: 6.950188ms Jun 4 10:55:15.610: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.180401ms Jun 4 10:55:19.913: INFO: Number of nodes with available pods: 0 Jun 4 10:55:19.913: INFO: Number of running nodes: 0, number of available pods: 0 Jun 4 10:55:19.915: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-c8kfp/daemonsets","resourceVersion":"14162996"},"items":null} Jun 4 10:55:19.918: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-c8kfp/pods","resourceVersion":"14162996"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:55:19.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-c8kfp" for this suite. 
Jun 4 10:55:25.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:55:25.958: INFO: namespace: e2e-tests-daemonsets-c8kfp, resource: bindings, ignored listing per whitelist Jun 4 10:55:26.025: INFO: namespace e2e-tests-daemonsets-c8kfp deletion completed in 6.095225903s • [SLOW TEST:41.831 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:55:26.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-2p9pd I0604 10:55:26.159667 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-2p9pd, replica count: 1 I0604 10:55:27.210108 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0604 10:55:28.210340 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0604 10:55:29.210547 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 4 10:55:29.359: INFO: Created: latency-svc-mmrqd Jun 4 10:55:29.408: INFO: Got endpoints: latency-svc-mmrqd [97.7953ms] Jun 4 10:55:29.443: INFO: Created: latency-svc-7ctkh Jun 4 10:55:29.458: INFO: Got endpoints: latency-svc-7ctkh [49.743847ms] Jun 4 10:55:29.484: INFO: Created: latency-svc-8gmzq Jun 4 10:55:29.503: INFO: Got endpoints: latency-svc-8gmzq [94.28882ms] Jun 4 10:55:29.549: INFO: Created: latency-svc-m7279 Jun 4 10:55:29.556: INFO: Got endpoints: latency-svc-m7279 [148.003295ms] Jun 4 10:55:29.580: INFO: Created: latency-svc-54xrt Jun 4 10:55:29.597: INFO: Got endpoints: latency-svc-54xrt [188.31694ms] Jun 4 10:55:29.622: INFO: Created: latency-svc-k7mgt Jun 4 10:55:29.639: INFO: Got endpoints: latency-svc-k7mgt [230.500242ms] Jun 4 10:55:29.694: INFO: Created: latency-svc-trc4f Jun 4 10:55:29.711: INFO: Got endpoints: latency-svc-trc4f [302.820849ms] Jun 4 10:55:29.736: INFO: Created: latency-svc-wnbwm Jun 4 10:55:29.754: INFO: Got endpoints: latency-svc-wnbwm [345.264763ms] Jun 4 10:55:29.813: INFO: Created: latency-svc-4hm2t Jun 4 10:55:29.815: INFO: Got endpoints: latency-svc-4hm2t [104.029297ms] Jun 4 10:55:29.874: INFO: Created: latency-svc-dcpvq Jun 4 10:55:29.890: INFO: Got endpoints: latency-svc-dcpvq [481.02939ms] Jun 4 10:55:29.957: INFO: Created: latency-svc-7qpvh Jun 4 10:55:29.960: INFO: Got 
endpoints: latency-svc-7qpvh [551.028178ms] Jun 4 10:55:29.994: INFO: Created: latency-svc-582w8 Jun 4 10:55:30.004: INFO: Got endpoints: latency-svc-582w8 [595.344631ms] Jun 4 10:55:30.030: INFO: Created: latency-svc-jqcfv Jun 4 10:55:30.040: INFO: Got endpoints: latency-svc-jqcfv [631.861637ms] Jun 4 10:55:30.142: INFO: Created: latency-svc-qd6vm Jun 4 10:55:30.146: INFO: Got endpoints: latency-svc-qd6vm [737.210174ms] Jun 4 10:55:30.192: INFO: Created: latency-svc-d5rmt Jun 4 10:55:30.209: INFO: Got endpoints: latency-svc-d5rmt [800.943471ms] Jun 4 10:55:30.304: INFO: Created: latency-svc-wx76m Jun 4 10:55:30.308: INFO: Got endpoints: latency-svc-wx76m [899.725477ms] Jun 4 10:55:30.336: INFO: Created: latency-svc-trdjb Jun 4 10:55:30.354: INFO: Got endpoints: latency-svc-trdjb [945.009654ms] Jun 4 10:55:30.384: INFO: Created: latency-svc-2pj6k Jun 4 10:55:30.402: INFO: Got endpoints: latency-svc-2pj6k [943.907105ms] Jun 4 10:55:30.447: INFO: Created: latency-svc-z9hr7 Jun 4 10:55:30.456: INFO: Got endpoints: latency-svc-z9hr7 [953.232488ms] Jun 4 10:55:30.504: INFO: Created: latency-svc-bp8vf Jun 4 10:55:30.609: INFO: Got endpoints: latency-svc-bp8vf [1.052608106s] Jun 4 10:55:30.636: INFO: Created: latency-svc-xbc24 Jun 4 10:55:30.685: INFO: Got endpoints: latency-svc-xbc24 [1.08807776s] Jun 4 10:55:30.774: INFO: Created: latency-svc-6t5wn Jun 4 10:55:30.787: INFO: Got endpoints: latency-svc-6t5wn [1.147948087s] Jun 4 10:55:30.828: INFO: Created: latency-svc-6j6tb Jun 4 10:55:30.845: INFO: Got endpoints: latency-svc-6j6tb [1.09139662s] Jun 4 10:55:30.915: INFO: Created: latency-svc-chk9p Jun 4 10:55:30.918: INFO: Got endpoints: latency-svc-chk9p [1.102608772s] Jun 4 10:55:30.955: INFO: Created: latency-svc-lwmkk Jun 4 10:55:30.972: INFO: Got endpoints: latency-svc-lwmkk [1.082183054s] Jun 4 10:55:30.996: INFO: Created: latency-svc-znfkn Jun 4 10:55:31.008: INFO: Got endpoints: latency-svc-znfkn [1.048552806s] Jun 4 10:55:31.071: INFO: Created: latency-svc-pqlkd Jun 4 10:55:31.080: INFO: Got endpoints: latency-svc-pqlkd [1.076510125s] Jun 4 10:55:31.109: INFO: Created: latency-svc-8xmb6 Jun 4 10:55:31.123: INFO: Got endpoints: latency-svc-8xmb6 [1.082837845s] Jun 4 10:55:31.157: INFO: Created: latency-svc-js7zh Jun 4 10:55:31.208: INFO: Got endpoints: latency-svc-js7zh [1.062049671s] Jun 4 10:55:31.235: INFO: Created: latency-svc-p46k4 Jun 4 10:55:31.244: INFO: Got endpoints: latency-svc-p46k4 [1.03394022s] Jun 4 10:55:31.277: INFO: Created: latency-svc-2nlm5 Jun 4 10:55:31.286: INFO: Got endpoints: latency-svc-2nlm5 [977.649472ms] Jun 4 10:55:32.220: INFO: Created: latency-svc-srjrx Jun 4 10:55:32.223: INFO: Got endpoints: latency-svc-srjrx [1.86956656s] Jun 4 10:55:32.268: INFO: Created: latency-svc-vw45x Jun 4 10:55:32.293: INFO: Got endpoints: latency-svc-vw45x [1.89113071s] Jun 4 10:55:32.364: INFO: Created: latency-svc-7jczt Jun 4 10:55:32.367: INFO: Got endpoints: latency-svc-7jczt [1.910905373s] Jun 4 10:55:32.421: INFO: Created: latency-svc-drhkd Jun 4 10:55:32.450: INFO: Got endpoints: latency-svc-drhkd [1.841047969s] Jun 4 10:55:32.513: INFO: Created: latency-svc-zlw9d Jun 4 10:55:32.516: INFO: Got endpoints: latency-svc-zlw9d [1.831468584s] Jun 4 10:55:32.567: INFO: Created: latency-svc-kcd8v Jun 4 10:55:32.588: INFO: Got endpoints: latency-svc-kcd8v [1.801290646s] Jun 4 10:55:32.658: INFO: Created: latency-svc-jgjs6 Jun 4 10:55:32.667: INFO: Got endpoints: latency-svc-jgjs6 [1.821306831s] Jun 4 10:55:32.752: INFO: Created: latency-svc-9dl8x Jun 4 10:55:32.812: INFO: Got 
endpoints: latency-svc-9dl8x [1.893887174s] Jun 4 10:55:32.824: INFO: Created: latency-svc-jbwxm Jun 4 10:55:32.835: INFO: Got endpoints: latency-svc-jbwxm [1.863457134s] Jun 4 10:55:32.877: INFO: Created: latency-svc-9wr6j Jun 4 10:55:32.908: INFO: Got endpoints: latency-svc-9wr6j [1.899657427s] Jun 4 10:55:32.968: INFO: Created: latency-svc-g5pvc Jun 4 10:55:32.974: INFO: Got endpoints: latency-svc-g5pvc [1.893720205s] Jun 4 10:55:32.997: INFO: Created: latency-svc-s8ctg Jun 4 10:55:33.010: INFO: Got endpoints: latency-svc-s8ctg [1.887288045s] Jun 4 10:55:33.033: INFO: Created: latency-svc-vqljn Jun 4 10:55:33.047: INFO: Got endpoints: latency-svc-vqljn [1.838885425s] Jun 4 10:55:33.142: INFO: Created: latency-svc-8fvxb Jun 4 10:55:33.144: INFO: Got endpoints: latency-svc-8fvxb [1.900409797s] Jun 4 10:55:33.200: INFO: Created: latency-svc-tnqj5 Jun 4 10:55:33.215: INFO: Got endpoints: latency-svc-tnqj5 [1.929255255s] Jun 4 10:55:33.310: INFO: Created: latency-svc-94xs2 Jun 4 10:55:33.316: INFO: Got endpoints: latency-svc-94xs2 [1.093014981s] Jun 4 10:55:33.344: INFO: Created: latency-svc-t97zk Jun 4 10:55:33.360: INFO: Got endpoints: latency-svc-t97zk [1.066806842s] Jun 4 10:55:33.387: INFO: Created: latency-svc-b9ls8 Jun 4 10:55:33.459: INFO: Got endpoints: latency-svc-b9ls8 [1.092031109s] Jun 4 10:55:33.471: INFO: Created: latency-svc-8xdbw Jun 4 10:55:33.481: INFO: Got endpoints: latency-svc-8xdbw [1.030477448s] Jun 4 10:55:33.507: INFO: Created: latency-svc-vhz2l Jun 4 10:55:33.517: INFO: Got endpoints: latency-svc-vhz2l [1.000833632s] Jun 4 10:55:33.543: INFO: Created: latency-svc-bzgmb Jun 4 10:55:33.608: INFO: Got endpoints: latency-svc-bzgmb [1.019818192s] Jun 4 10:55:33.627: INFO: Created: latency-svc-dpjjf Jun 4 10:55:33.644: INFO: Got endpoints: latency-svc-dpjjf [977.451248ms] Jun 4 10:55:33.669: INFO: Created: latency-svc-n6k5n Jun 4 10:55:33.681: INFO: Got endpoints: latency-svc-n6k5n [868.515098ms] Jun 4 10:55:33.705: INFO: Created: latency-svc-wbtxj Jun 4 10:55:33.746: INFO: Got endpoints: latency-svc-wbtxj [910.910921ms] Jun 4 10:55:33.776: INFO: Created: latency-svc-x67rc Jun 4 10:55:33.795: INFO: Got endpoints: latency-svc-x67rc [887.089239ms] Jun 4 10:55:33.818: INFO: Created: latency-svc-s7lzj Jun 4 10:55:33.828: INFO: Got endpoints: latency-svc-s7lzj [853.630557ms] Jun 4 10:55:33.896: INFO: Created: latency-svc-4rl7g Jun 4 10:55:33.899: INFO: Got endpoints: latency-svc-4rl7g [888.651602ms] Jun 4 10:55:33.963: INFO: Created: latency-svc-m25df Jun 4 10:55:33.993: INFO: Got endpoints: latency-svc-m25df [946.12015ms] Jun 4 10:55:34.052: INFO: Created: latency-svc-f7t6l Jun 4 10:55:34.057: INFO: Got endpoints: latency-svc-f7t6l [912.590402ms] Jun 4 10:55:34.088: INFO: Created: latency-svc-z9gt6 Jun 4 10:55:34.103: INFO: Got endpoints: latency-svc-z9gt6 [888.182312ms] Jun 4 10:55:34.131: INFO: Created: latency-svc-74d99 Jun 4 10:55:34.219: INFO: Got endpoints: latency-svc-74d99 [902.760373ms] Jun 4 10:55:34.222: INFO: Created: latency-svc-6skpn Jun 4 10:55:34.244: INFO: Got endpoints: latency-svc-6skpn [884.255393ms] Jun 4 10:55:34.281: INFO: Created: latency-svc-fbm4g Jun 4 10:55:34.296: INFO: Got endpoints: latency-svc-fbm4g [837.203463ms] Jun 4 10:55:34.363: INFO: Created: latency-svc-v8qfz Jun 4 10:55:34.382: INFO: Got endpoints: latency-svc-v8qfz [900.793237ms] Jun 4 10:55:34.437: INFO: Created: latency-svc-t9b82 Jun 4 10:55:34.519: INFO: Got endpoints: latency-svc-t9b82 [1.001574027s] Jun 4 10:55:34.533: INFO: Created: latency-svc-j9nb2 Jun 4 10:55:34.543: INFO: Got 
endpoints: latency-svc-j9nb2 [935.008913ms] Jun 4 10:55:34.569: INFO: Created: latency-svc-4hzfs Jun 4 10:55:34.580: INFO: Got endpoints: latency-svc-4hzfs [935.665959ms] Jun 4 10:55:34.687: INFO: Created: latency-svc-84wx5 Jun 4 10:55:34.689: INFO: Got endpoints: latency-svc-84wx5 [1.008532533s] Jun 4 10:55:34.743: INFO: Created: latency-svc-cxq77 Jun 4 10:55:34.767: INFO: Got endpoints: latency-svc-cxq77 [1.020440424s] Jun 4 10:55:34.833: INFO: Created: latency-svc-4vstj Jun 4 10:55:34.863: INFO: Got endpoints: latency-svc-4vstj [1.067638998s] Jun 4 10:55:34.892: INFO: Created: latency-svc-sskh7 Jun 4 10:55:34.918: INFO: Got endpoints: latency-svc-sskh7 [1.090168893s] Jun 4 10:55:34.980: INFO: Created: latency-svc-xnqg6 Jun 4 10:55:34.994: INFO: Got endpoints: latency-svc-xnqg6 [1.094788389s] Jun 4 10:55:35.025: INFO: Created: latency-svc-gjjb4 Jun 4 10:55:35.038: INFO: Got endpoints: latency-svc-gjjb4 [1.044968089s] Jun 4 10:55:35.078: INFO: Created: latency-svc-pgwvs Jun 4 10:55:35.130: INFO: Got endpoints: latency-svc-pgwvs [1.072729238s] Jun 4 10:55:35.150: INFO: Created: latency-svc-wvk5p Jun 4 10:55:35.165: INFO: Got endpoints: latency-svc-wvk5p [1.06143321s] Jun 4 10:55:35.199: INFO: Created: latency-svc-mlnwt Jun 4 10:55:35.228: INFO: Got endpoints: latency-svc-mlnwt [1.008801565s] Jun 4 10:55:35.295: INFO: Created: latency-svc-gmrwr Jun 4 10:55:35.323: INFO: Got endpoints: latency-svc-gmrwr [1.078607766s] Jun 4 10:55:35.359: INFO: Created: latency-svc-cjqjm Jun 4 10:55:35.423: INFO: Got endpoints: latency-svc-cjqjm [1.126248945s] Jun 4 10:55:35.431: INFO: Created: latency-svc-dghc5 Jun 4 10:55:35.448: INFO: Got endpoints: latency-svc-dghc5 [1.066577639s] Jun 4 10:55:35.481: INFO: Created: latency-svc-rsdq6 Jun 4 10:55:35.492: INFO: Got endpoints: latency-svc-rsdq6 [972.799308ms] Jun 4 10:55:35.521: INFO: Created: latency-svc-rmsfb Jun 4 10:55:35.567: INFO: Got endpoints: latency-svc-rmsfb [1.023205857s] Jun 4 10:55:35.582: INFO: Created: latency-svc-x56zk Jun 4 10:55:35.600: INFO: Got endpoints: latency-svc-x56zk [1.019652905s] Jun 4 10:55:35.642: INFO: Created: latency-svc-ftcz7 Jun 4 10:55:35.660: INFO: Got endpoints: latency-svc-ftcz7 [970.631837ms] Jun 4 10:55:35.717: INFO: Created: latency-svc-qlm87 Jun 4 10:55:35.726: INFO: Got endpoints: latency-svc-qlm87 [958.792554ms] Jun 4 10:55:35.750: INFO: Created: latency-svc-tzkjr Jun 4 10:55:35.769: INFO: Got endpoints: latency-svc-tzkjr [906.075732ms] Jun 4 10:55:35.792: INFO: Created: latency-svc-rmwfg Jun 4 10:55:35.805: INFO: Got endpoints: latency-svc-rmwfg [886.832164ms] Jun 4 10:55:35.872: INFO: Created: latency-svc-7sgrw Jun 4 10:55:35.900: INFO: Got endpoints: latency-svc-7sgrw [906.026895ms] Jun 4 10:55:35.901: INFO: Created: latency-svc-wkz58 Jun 4 10:55:35.914: INFO: Got endpoints: latency-svc-wkz58 [876.149156ms] Jun 4 10:55:35.947: INFO: Created: latency-svc-5ptbc Jun 4 10:55:35.962: INFO: Got endpoints: latency-svc-5ptbc [832.795801ms] Jun 4 10:55:36.197: INFO: Created: latency-svc-rpr9f Jun 4 10:55:36.494: INFO: Got endpoints: latency-svc-rpr9f [1.32935163s] Jun 4 10:55:36.547: INFO: Created: latency-svc-b7z46 Jun 4 10:55:36.556: INFO: Got endpoints: latency-svc-b7z46 [1.328314196s] Jun 4 10:55:36.576: INFO: Created: latency-svc-ghdtr Jun 4 10:55:36.650: INFO: Got endpoints: latency-svc-ghdtr [1.326881435s] Jun 4 10:55:36.667: INFO: Created: latency-svc-sqczt Jun 4 10:55:36.677: INFO: Got endpoints: latency-svc-sqczt [1.254332221s] Jun 4 10:55:36.721: INFO: Created: latency-svc-n7vsw Jun 4 10:55:36.731: INFO: Got 
endpoints: latency-svc-n7vsw [1.282949469s] Jun 4 10:55:36.794: INFO: Created: latency-svc-pkrvz Jun 4 10:55:36.797: INFO: Got endpoints: latency-svc-pkrvz [1.305071918s] Jun 4 10:55:36.823: INFO: Created: latency-svc-72hb8 Jun 4 10:55:36.840: INFO: Got endpoints: latency-svc-72hb8 [1.273473602s] Jun 4 10:55:36.870: INFO: Created: latency-svc-dzbdr Jun 4 10:55:36.932: INFO: Got endpoints: latency-svc-dzbdr [1.331972863s] Jun 4 10:55:36.955: INFO: Created: latency-svc-qk7fj Jun 4 10:55:37.008: INFO: Got endpoints: latency-svc-qk7fj [1.348424624s] Jun 4 10:55:37.095: INFO: Created: latency-svc-29x5g Jun 4 10:55:37.098: INFO: Got endpoints: latency-svc-29x5g [1.37257337s] Jun 4 10:55:37.129: INFO: Created: latency-svc-jfwv6 Jun 4 10:55:37.142: INFO: Got endpoints: latency-svc-jfwv6 [1.373154662s] Jun 4 10:55:37.171: INFO: Created: latency-svc-mhzsp Jun 4 10:55:37.184: INFO: Got endpoints: latency-svc-mhzsp [1.379003735s] Jun 4 10:55:37.256: INFO: Created: latency-svc-t6r7c Jun 4 10:55:37.263: INFO: Got endpoints: latency-svc-t6r7c [1.362502596s] Jun 4 10:55:37.284: INFO: Created: latency-svc-pfklv Jun 4 10:55:37.299: INFO: Got endpoints: latency-svc-pfklv [1.384742516s] Jun 4 10:55:37.352: INFO: Created: latency-svc-ghl94 Jun 4 10:55:37.399: INFO: Got endpoints: latency-svc-ghl94 [1.436645023s] Jun 4 10:55:37.411: INFO: Created: latency-svc-6gg8j Jun 4 10:55:37.420: INFO: Got endpoints: latency-svc-6gg8j [924.965647ms] Jun 4 10:55:37.453: INFO: Created: latency-svc-c2cvx Jun 4 10:55:37.482: INFO: Got endpoints: latency-svc-c2cvx [925.848695ms] Jun 4 10:55:37.555: INFO: Created: latency-svc-ndjrn Jun 4 10:55:37.601: INFO: Got endpoints: latency-svc-ndjrn [950.316115ms] Jun 4 10:55:37.646: INFO: Created: latency-svc-tvgzc Jun 4 10:55:37.698: INFO: Got endpoints: latency-svc-tvgzc [1.02140711s] Jun 4 10:55:37.740: INFO: Created: latency-svc-b2h9s Jun 4 10:55:37.757: INFO: Got endpoints: latency-svc-b2h9s [1.02605986s] Jun 4 10:55:37.788: INFO: Created: latency-svc-f6v9x Jun 4 10:55:37.848: INFO: Got endpoints: latency-svc-f6v9x [1.050785539s] Jun 4 10:55:37.885: INFO: Created: latency-svc-5hgdr Jun 4 10:55:37.904: INFO: Got endpoints: latency-svc-5hgdr [1.064196394s] Jun 4 10:55:37.927: INFO: Created: latency-svc-fxg29 Jun 4 10:55:37.946: INFO: Got endpoints: latency-svc-fxg29 [1.014566268s] Jun 4 10:55:38.004: INFO: Created: latency-svc-qz9cp Jun 4 10:55:38.012: INFO: Got endpoints: latency-svc-qz9cp [1.0038717s] Jun 4 10:55:38.040: INFO: Created: latency-svc-d4t4c Jun 4 10:55:38.058: INFO: Got endpoints: latency-svc-d4t4c [959.347853ms] Jun 4 10:55:38.094: INFO: Created: latency-svc-77gmw Jun 4 10:55:38.142: INFO: Got endpoints: latency-svc-77gmw [999.414654ms] Jun 4 10:55:38.150: INFO: Created: latency-svc-lwsdr Jun 4 10:55:38.164: INFO: Got endpoints: latency-svc-lwsdr [979.806887ms] Jun 4 10:55:38.191: INFO: Created: latency-svc-4sx2g Jun 4 10:55:38.208: INFO: Got endpoints: latency-svc-4sx2g [945.500866ms] Jun 4 10:55:38.226: INFO: Created: latency-svc-h8k7h Jun 4 10:55:38.236: INFO: Got endpoints: latency-svc-h8k7h [937.111983ms] Jun 4 10:55:38.292: INFO: Created: latency-svc-btkv7 Jun 4 10:55:38.303: INFO: Got endpoints: latency-svc-btkv7 [904.190724ms] Jun 4 10:55:38.342: INFO: Created: latency-svc-7fr2x Jun 4 10:55:38.363: INFO: Got endpoints: latency-svc-7fr2x [943.814632ms] Jun 4 10:55:38.389: INFO: Created: latency-svc-kn8bt Jun 4 10:55:38.447: INFO: Got endpoints: latency-svc-kn8bt [964.658584ms] Jun 4 10:55:38.461: INFO: Created: latency-svc-cbxnb Jun 4 10:55:38.478: INFO: Got 
endpoints: latency-svc-cbxnb [877.673862ms] Jun 4 10:55:38.508: INFO: Created: latency-svc-hfpnp Jun 4 10:55:38.538: INFO: Got endpoints: latency-svc-hfpnp [839.014294ms] Jun 4 10:55:38.615: INFO: Created: latency-svc-4gdq4 Jun 4 10:55:38.635: INFO: Got endpoints: latency-svc-4gdq4 [877.630586ms] Jun 4 10:55:38.683: INFO: Created: latency-svc-sp4jn Jun 4 10:55:38.714: INFO: Got endpoints: latency-svc-sp4jn [865.792833ms] Jun 4 10:55:38.784: INFO: Created: latency-svc-ksbb5 Jun 4 10:55:38.804: INFO: Got endpoints: latency-svc-ksbb5 [899.22073ms] Jun 4 10:55:38.838: INFO: Created: latency-svc-5vfh2 Jun 4 10:55:38.932: INFO: Got endpoints: latency-svc-5vfh2 [985.822589ms] Jun 4 10:55:38.947: INFO: Created: latency-svc-k5nrj Jun 4 10:55:38.961: INFO: Got endpoints: latency-svc-k5nrj [948.166621ms] Jun 4 10:55:38.989: INFO: Created: latency-svc-n4skz Jun 4 10:55:39.009: INFO: Got endpoints: latency-svc-n4skz [951.208676ms] Jun 4 10:55:39.106: INFO: Created: latency-svc-xvxhs Jun 4 10:55:39.132: INFO: Got endpoints: latency-svc-xvxhs [990.020202ms] Jun 4 10:55:39.157: INFO: Created: latency-svc-drq6q Jun 4 10:55:39.184: INFO: Got endpoints: latency-svc-drq6q [1.019555338s] Jun 4 10:55:39.277: INFO: Created: latency-svc-rng9d Jun 4 10:55:39.286: INFO: Got endpoints: latency-svc-rng9d [1.077586211s] Jun 4 10:55:39.324: INFO: Created: latency-svc-xxpzz Jun 4 10:55:39.346: INFO: Got endpoints: latency-svc-xxpzz [1.110010027s] Jun 4 10:55:39.366: INFO: Created: latency-svc-mfxvs Jun 4 10:55:39.429: INFO: Got endpoints: latency-svc-mfxvs [1.125710569s] Jun 4 10:55:39.434: INFO: Created: latency-svc-f2knz Jun 4 10:55:39.468: INFO: Created: latency-svc-s8qm8 Jun 4 10:55:39.488: INFO: Got endpoints: latency-svc-f2knz [1.124377701s] Jun 4 10:55:39.491: INFO: Got endpoints: latency-svc-s8qm8 [1.044025504s] Jun 4 10:55:39.511: INFO: Created: latency-svc-9x2rb Jun 4 10:55:39.528: INFO: Got endpoints: latency-svc-9x2rb [1.049299006s] Jun 4 10:55:39.579: INFO: Created: latency-svc-h9h46 Jun 4 10:55:39.588: INFO: Got endpoints: latency-svc-h9h46 [1.050266685s] Jun 4 10:55:39.611: INFO: Created: latency-svc-fjh9f Jun 4 10:55:39.631: INFO: Got endpoints: latency-svc-fjh9f [995.631369ms] Jun 4 10:55:39.666: INFO: Created: latency-svc-dscv5 Jun 4 10:55:39.723: INFO: Got endpoints: latency-svc-dscv5 [1.008866373s] Jun 4 10:55:39.743: INFO: Created: latency-svc-nvz8z Jun 4 10:55:39.762: INFO: Got endpoints: latency-svc-nvz8z [958.547852ms] Jun 4 10:55:39.798: INFO: Created: latency-svc-h99j9 Jun 4 10:55:39.821: INFO: Got endpoints: latency-svc-h99j9 [888.944741ms] Jun 4 10:55:39.888: INFO: Created: latency-svc-j4grf Jun 4 10:55:39.902: INFO: Got endpoints: latency-svc-j4grf [941.651834ms] Jun 4 10:55:39.929: INFO: Created: latency-svc-6x54p Jun 4 10:55:39.944: INFO: Got endpoints: latency-svc-6x54p [935.330023ms] Jun 4 10:55:39.971: INFO: Created: latency-svc-r7qqj Jun 4 10:55:40.022: INFO: Got endpoints: latency-svc-r7qqj [890.185193ms] Jun 4 10:55:40.044: INFO: Created: latency-svc-nsgqb Jun 4 10:55:40.060: INFO: Got endpoints: latency-svc-nsgqb [875.821781ms] Jun 4 10:55:40.086: INFO: Created: latency-svc-xv8k5 Jun 4 10:55:40.102: INFO: Got endpoints: latency-svc-xv8k5 [815.850956ms] Jun 4 10:55:40.159: INFO: Created: latency-svc-27g2v Jun 4 10:55:40.162: INFO: Got endpoints: latency-svc-27g2v [815.822519ms] Jun 4 10:55:40.199: INFO: Created: latency-svc-sdlr4 Jun 4 10:55:40.229: INFO: Got endpoints: latency-svc-sdlr4 [799.387901ms] Jun 4 10:55:40.259: INFO: Created: latency-svc-nntcc Jun 4 10:55:40.310: INFO: Got 
endpoints: latency-svc-nntcc [821.728903ms] Jun 4 10:55:40.338: INFO: Created: latency-svc-m9smp Jun 4 10:55:40.355: INFO: Got endpoints: latency-svc-m9smp [864.28553ms] Jun 4 10:55:40.392: INFO: Created: latency-svc-fc22h Jun 4 10:55:40.459: INFO: Got endpoints: latency-svc-fc22h [931.478847ms] Jun 4 10:55:40.461: INFO: Created: latency-svc-g5rn7 Jun 4 10:55:40.470: INFO: Got endpoints: latency-svc-g5rn7 [881.846976ms] Jun 4 10:55:40.499: INFO: Created: latency-svc-wpxng Jun 4 10:55:40.519: INFO: Got endpoints: latency-svc-wpxng [887.628287ms] Jun 4 10:55:40.553: INFO: Created: latency-svc-hdrzd Jun 4 10:55:40.602: INFO: Got endpoints: latency-svc-hdrzd [879.867121ms] Jun 4 10:55:40.638: INFO: Created: latency-svc-bknjl Jun 4 10:55:40.669: INFO: Got endpoints: latency-svc-bknjl [907.068518ms] Jun 4 10:55:40.692: INFO: Created: latency-svc-8ckzj Jun 4 10:55:40.772: INFO: Got endpoints: latency-svc-8ckzj [950.31779ms] Jun 4 10:55:40.816: INFO: Created: latency-svc-m5s4c Jun 4 10:55:40.840: INFO: Got endpoints: latency-svc-m5s4c [937.991445ms] Jun 4 10:55:40.908: INFO: Created: latency-svc-99llp Jun 4 10:55:40.929: INFO: Got endpoints: latency-svc-99llp [984.200719ms] Jun 4 10:55:40.991: INFO: Created: latency-svc-bklkk Jun 4 10:55:41.052: INFO: Got endpoints: latency-svc-bklkk [1.029926388s] Jun 4 10:55:41.063: INFO: Created: latency-svc-9b69d Jun 4 10:55:41.079: INFO: Got endpoints: latency-svc-9b69d [1.019547693s] Jun 4 10:55:41.135: INFO: Created: latency-svc-lh7dg Jun 4 10:55:41.195: INFO: Got endpoints: latency-svc-lh7dg [1.093532489s] Jun 4 10:55:41.243: INFO: Created: latency-svc-84vxb Jun 4 10:55:41.278: INFO: Got endpoints: latency-svc-84vxb [1.115574955s] Jun 4 10:55:41.352: INFO: Created: latency-svc-wzqn2 Jun 4 10:55:41.399: INFO: Got endpoints: latency-svc-wzqn2 [1.1699674s] Jun 4 10:55:41.399: INFO: Created: latency-svc-txf25 Jun 4 10:55:41.410: INFO: Got endpoints: latency-svc-txf25 [1.100090485s] Jun 4 10:55:41.495: INFO: Created: latency-svc-kntcw Jun 4 10:55:41.519: INFO: Got endpoints: latency-svc-kntcw [1.163205552s] Jun 4 10:55:41.556: INFO: Created: latency-svc-5t8bt Jun 4 10:55:41.574: INFO: Got endpoints: latency-svc-5t8bt [1.114365129s] Jun 4 10:55:41.639: INFO: Created: latency-svc-qm4c2 Jun 4 10:55:41.645: INFO: Got endpoints: latency-svc-qm4c2 [1.175408462s] Jun 4 10:55:41.674: INFO: Created: latency-svc-795xx Jun 4 10:55:41.688: INFO: Got endpoints: latency-svc-795xx [1.169084163s] Jun 4 10:55:41.716: INFO: Created: latency-svc-77492 Jun 4 10:55:42.543: INFO: Got endpoints: latency-svc-77492 [1.940295383s] Jun 4 10:55:42.557: INFO: Created: latency-svc-4tf8l Jun 4 10:55:42.582: INFO: Got endpoints: latency-svc-4tf8l [1.912426543s] Jun 4 10:55:42.633: INFO: Created: latency-svc-j4rgq Jun 4 10:55:42.699: INFO: Got endpoints: latency-svc-j4rgq [1.927147962s] Jun 4 10:55:42.724: INFO: Created: latency-svc-zs65h Jun 4 10:55:42.744: INFO: Got endpoints: latency-svc-zs65h [1.903733181s] Jun 4 10:55:42.878: INFO: Created: latency-svc-6xv96 Jun 4 10:55:42.892: INFO: Got endpoints: latency-svc-6xv96 [1.963612323s] Jun 4 10:55:42.914: INFO: Created: latency-svc-cktrf Jun 4 10:55:42.928: INFO: Got endpoints: latency-svc-cktrf [1.876421275s] Jun 4 10:55:42.951: INFO: Created: latency-svc-dkxqr Jun 4 10:55:42.965: INFO: Got endpoints: latency-svc-dkxqr [1.885366352s] Jun 4 10:55:43.034: INFO: Created: latency-svc-fxrkd Jun 4 10:55:43.036: INFO: Got endpoints: latency-svc-fxrkd [1.840833883s] Jun 4 10:55:43.113: INFO: Created: latency-svc-t5bd7 Jun 4 10:55:43.243: INFO: Got 
endpoints: latency-svc-t5bd7 [1.965235043s] Jun 4 10:55:43.245: INFO: Created: latency-svc-hbxss Jun 4 10:55:43.254: INFO: Got endpoints: latency-svc-hbxss [1.854966317s] Jun 4 10:55:43.294: INFO: Created: latency-svc-8jt9m Jun 4 10:55:43.302: INFO: Got endpoints: latency-svc-8jt9m [1.892207911s] Jun 4 10:55:43.324: INFO: Created: latency-svc-pmshz Jun 4 10:55:43.326: INFO: Got endpoints: latency-svc-pmshz [1.807626529s] Jun 4 10:55:43.431: INFO: Created: latency-svc-7ql2t Jun 4 10:55:43.448: INFO: Got endpoints: latency-svc-7ql2t [1.873947728s] Jun 4 10:55:43.466: INFO: Created: latency-svc-lggpr Jun 4 10:55:43.484: INFO: Got endpoints: latency-svc-lggpr [1.838320886s] Jun 4 10:55:43.537: INFO: Created: latency-svc-zktqr Jun 4 10:55:43.543: INFO: Got endpoints: latency-svc-zktqr [1.855623s] Jun 4 10:55:43.582: INFO: Created: latency-svc-p95hm Jun 4 10:55:43.592: INFO: Got endpoints: latency-svc-p95hm [1.048823754s] Jun 4 10:55:43.616: INFO: Created: latency-svc-sv772 Jun 4 10:55:43.692: INFO: Got endpoints: latency-svc-sv772 [1.110553339s] Jun 4 10:55:43.718: INFO: Created: latency-svc-88r8j Jun 4 10:55:43.737: INFO: Got endpoints: latency-svc-88r8j [1.038108186s] Jun 4 10:55:43.763: INFO: Created: latency-svc-cxt7l Jun 4 10:55:43.779: INFO: Got endpoints: latency-svc-cxt7l [1.035282925s] Jun 4 10:55:43.843: INFO: Created: latency-svc-cbxph Jun 4 10:55:43.845: INFO: Got endpoints: latency-svc-cbxph [952.588413ms] Jun 4 10:55:43.892: INFO: Created: latency-svc-wtgg8 Jun 4 10:55:43.923: INFO: Got endpoints: latency-svc-wtgg8 [994.090531ms] Jun 4 10:55:43.986: INFO: Created: latency-svc-4l7sc Jun 4 10:55:43.988: INFO: Got endpoints: latency-svc-4l7sc [1.023576865s] Jun 4 10:55:44.013: INFO: Created: latency-svc-ng9qw Jun 4 10:55:44.027: INFO: Got endpoints: latency-svc-ng9qw [990.633073ms] Jun 4 10:55:44.049: INFO: Created: latency-svc-z2v5m Jun 4 10:55:44.057: INFO: Got endpoints: latency-svc-z2v5m [813.973532ms] Jun 4 10:55:44.079: INFO: Created: latency-svc-z4c75 Jun 4 10:55:44.135: INFO: Got endpoints: latency-svc-z4c75 [881.798415ms] Jun 4 10:55:44.144: INFO: Created: latency-svc-nqdc4 Jun 4 10:55:44.160: INFO: Got endpoints: latency-svc-nqdc4 [857.760398ms] Jun 4 10:55:44.186: INFO: Created: latency-svc-84v8t Jun 4 10:55:44.203: INFO: Got endpoints: latency-svc-84v8t [876.251767ms] Jun 4 10:55:44.222: INFO: Created: latency-svc-fwlrw Jun 4 10:55:44.273: INFO: Got endpoints: latency-svc-fwlrw [825.842367ms] Jun 4 10:55:44.289: INFO: Created: latency-svc-rx4kk Jun 4 10:55:44.305: INFO: Got endpoints: latency-svc-rx4kk [821.824ms] Jun 4 10:55:44.331: INFO: Created: latency-svc-mdprg Jun 4 10:55:44.349: INFO: Got endpoints: latency-svc-mdprg [806.014188ms] Jun 4 10:55:44.419: INFO: Created: latency-svc-nx248 Jun 4 10:55:44.437: INFO: Got endpoints: latency-svc-nx248 [845.625612ms] Jun 4 10:55:44.438: INFO: Latencies: [49.743847ms 94.28882ms 104.029297ms 148.003295ms 188.31694ms 230.500242ms 302.820849ms 345.264763ms 481.02939ms 551.028178ms 595.344631ms 631.861637ms 737.210174ms 799.387901ms 800.943471ms 806.014188ms 813.973532ms 815.822519ms 815.850956ms 821.728903ms 821.824ms 825.842367ms 832.795801ms 837.203463ms 839.014294ms 845.625612ms 853.630557ms 857.760398ms 864.28553ms 865.792833ms 868.515098ms 875.821781ms 876.149156ms 876.251767ms 877.630586ms 877.673862ms 879.867121ms 881.798415ms 881.846976ms 884.255393ms 886.832164ms 887.089239ms 887.628287ms 888.182312ms 888.651602ms 888.944741ms 890.185193ms 899.22073ms 899.725477ms 900.793237ms 902.760373ms 904.190724ms 906.026895ms 
906.075732ms 907.068518ms 910.910921ms 912.590402ms 924.965647ms 925.848695ms 931.478847ms 935.008913ms 935.330023ms 935.665959ms 937.111983ms 937.991445ms 941.651834ms 943.814632ms 943.907105ms 945.009654ms 945.500866ms 946.12015ms 948.166621ms 950.316115ms 950.31779ms 951.208676ms 952.588413ms 953.232488ms 958.547852ms 958.792554ms 959.347853ms 964.658584ms 970.631837ms 972.799308ms 977.451248ms 977.649472ms 979.806887ms 984.200719ms 985.822589ms 990.020202ms 990.633073ms 994.090531ms 995.631369ms 999.414654ms 1.000833632s 1.001574027s 1.0038717s 1.008532533s 1.008801565s 1.008866373s 1.014566268s 1.019547693s 1.019555338s 1.019652905s 1.019818192s 1.020440424s 1.02140711s 1.023205857s 1.023576865s 1.02605986s 1.029926388s 1.030477448s 1.03394022s 1.035282925s 1.038108186s 1.044025504s 1.044968089s 1.048552806s 1.048823754s 1.049299006s 1.050266685s 1.050785539s 1.052608106s 1.06143321s 1.062049671s 1.064196394s 1.066577639s 1.066806842s 1.067638998s 1.072729238s 1.076510125s 1.077586211s 1.078607766s 1.082183054s 1.082837845s 1.08807776s 1.090168893s 1.09139662s 1.092031109s 1.093014981s 1.093532489s 1.094788389s 1.100090485s 1.102608772s 1.110010027s 1.110553339s 1.114365129s 1.115574955s 1.124377701s 1.125710569s 1.126248945s 1.147948087s 1.163205552s 1.169084163s 1.1699674s 1.175408462s 1.254332221s 1.273473602s 1.282949469s 1.305071918s 1.326881435s 1.328314196s 1.32935163s 1.331972863s 1.348424624s 1.362502596s 1.37257337s 1.373154662s 1.379003735s 1.384742516s 1.436645023s 1.801290646s 1.807626529s 1.821306831s 1.831468584s 1.838320886s 1.838885425s 1.840833883s 1.841047969s 1.854966317s 1.855623s 1.863457134s 1.86956656s 1.873947728s 1.876421275s 1.885366352s 1.887288045s 1.89113071s 1.892207911s 1.893720205s 1.893887174s 1.899657427s 1.900409797s 1.903733181s 1.910905373s 1.912426543s 1.927147962s 1.929255255s 1.940295383s 1.963612323s 1.965235043s] Jun 4 10:55:44.438: INFO: 50 %ile: 1.019547693s Jun 4 10:55:44.438: INFO: 90 %ile: 1.863457134s Jun 4 10:55:44.438: INFO: 99 %ile: 1.963612323s Jun 4 10:55:44.438: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:55:44.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-2p9pd" for this suite. 
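The summary lines above report the 50th/90th/99th-percentile service-endpoint latency over 200 samples (p50 ≈ 1.02s, p90 ≈ 1.86s, p99 ≈ 1.96s). As a rough, self-contained sketch of how percentiles like these can be read off a sorted sample list (a simple nearest-rank lookup, not necessarily the e2e framework's exact rounding rule):

package main

import (
  "fmt"
  "sort"
  "time"
)

// percentile picks the q-th percentile (0 < q <= 1) from an ascending sample
// using a nearest-rank lookup; the framework's exact rounding may differ.
func percentile(sorted []time.Duration, q float64) time.Duration {
  if len(sorted) == 0 {
    return 0
  }
  idx := int(q*float64(len(sorted))) - 1
  if idx < 0 {
    idx = 0
  }
  if idx >= len(sorted) {
    idx = len(sorted) - 1
  }
  return sorted[idx]
}

func main() {
  // Hypothetical subset of the 200 latencies measured in the run above.
  samples := []time.Duration{
    49743847 * time.Nanosecond,
    906075732 * time.Nanosecond,
    1019547693 * time.Nanosecond,
    1863457134 * time.Nanosecond,
    1963612323 * time.Nanosecond,
  }
  sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
  for _, q := range []float64{0.50, 0.90, 0.99} {
    fmt.Printf("%2.0f %%ile: %v\n", q*100, percentile(samples, q))
  }
}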
Jun 4 10:56:14.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:56:14.523: INFO: namespace: e2e-tests-svc-latency-2p9pd, resource: bindings, ignored listing per whitelist Jun 4 10:56:14.547: INFO: namespace e2e-tests-svc-latency-2p9pd deletion completed in 30.089948834s • [SLOW TEST:48.521 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:56:14.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 4 10:56:14.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-gljkf' Jun 4 10:56:14.746: INFO: stderr: "" Jun 4 10:56:14.746: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Jun 4 10:56:14.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-gljkf' Jun 4 10:56:21.265: INFO: stderr: "" Jun 4 10:56:21.265: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:56:21.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gljkf" for this suite. 
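The `kubectl run ... --restart=Never --generator=run-pod/v1` invocation above creates a bare Pod rather than a Deployment or Job. A minimal sketch of an equivalent object, built with the Kubernetes Go API types and printed as YAML for `kubectl apply -f -`; the `run:` label and container name mirror kubectl's usual defaults and are assumptions, not taken from this run:

package main

import (
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "sigs.k8s.io/yaml"
)

func main() {
  // Roughly what `kubectl run --restart=Never --generator=run-pod/v1` generates:
  // a bare Pod with RestartPolicy=Never, no workload controller wrapper.
  pod := &corev1.Pod{
    TypeMeta: metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
    ObjectMeta: metav1.ObjectMeta{
      Name:   "e2e-test-nginx-pod",
      Labels: map[string]string{"run": "e2e-test-nginx-pod"}, // assumed default label
    },
    Spec: corev1.PodSpec{
      RestartPolicy: corev1.RestartPolicyNever,
      Containers: []corev1.Container{{
        Name:  "e2e-test-nginx-pod",
        Image: "docker.io/library/nginx:1.14-alpine",
      }},
    },
  }
  out, err := yaml.Marshal(pod)
  if err != nil {
    panic(err)
  }
  fmt.Print(string(out)) // pipe to `kubectl apply -f -`; clean up with `kubectl delete pod e2e-test-nginx-pod`
}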
Jun 4 10:56:27.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:56:27.346: INFO: namespace: e2e-tests-kubectl-gljkf, resource: bindings, ignored listing per whitelist Jun 4 10:56:27.401: INFO: namespace e2e-tests-kubectl-gljkf deletion completed in 6.124226929s • [SLOW TEST:12.854 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:56:27.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 4 10:56:27.512: INFO: Waiting up to 5m0s for pod "pod-0a15fdf1-a652-11ea-86dc-0242ac110018" in namespace "e2e-tests-emptydir-8wvd7" to be "success or failure" Jun 4 10:56:27.568: INFO: Pod "pod-0a15fdf1-a652-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 55.793158ms Jun 4 10:56:29.571: INFO: Pod "pod-0a15fdf1-a652-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05942441s Jun 4 10:56:31.576: INFO: Pod "pod-0a15fdf1-a652-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063845531s STEP: Saw pod success Jun 4 10:56:31.576: INFO: Pod "pod-0a15fdf1-a652-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 10:56:31.578: INFO: Trying to get logs from node hunter-worker2 pod pod-0a15fdf1-a652-11ea-86dc-0242ac110018 container test-container: STEP: delete the pod Jun 4 10:56:31.607: INFO: Waiting for pod pod-0a15fdf1-a652-11ea-86dc-0242ac110018 to disappear Jun 4 10:56:31.621: INFO: Pod pod-0a15fdf1-a652-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:56:31.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-8wvd7" for this suite. 
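The emptyDir case above mounts a tmpfs-backed volume (medium "Memory"), runs as a non-root user, and checks that a file with mode 0644 carries the expected permissions. A rough approximation of that scenario, assuming a busybox image and UID 1000 in place of the e2e mount-test image:

package main

import (
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "sigs.k8s.io/yaml"
)

func main() {
  nonRootUID := int64(1000) // assumed non-root UID
  pod := &corev1.Pod{
    TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
    ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-tmpfs-demo"},
    Spec: corev1.PodSpec{
      RestartPolicy:   corev1.RestartPolicyNever,
      SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
      Volumes: []corev1.Volume{{
        Name: "cache",
        VolumeSource: corev1.VolumeSource{
          // medium=Memory backs the emptyDir with tmpfs
          EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
        },
      }},
      Containers: []corev1.Container{{
        Name:  "test-container",
        Image: "docker.io/library/busybox:1.29", // stand-in for the e2e mount-test image
        Command: []string{"sh", "-c",
          "touch /cache/file && chmod 0644 /cache/file && ls -l /cache/file && mount | grep /cache"},
        VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/cache"}},
      }},
    },
  }
  out, err := yaml.Marshal(pod)
  if err != nil {
    panic(err)
  }
  fmt.Print(string(out)) // apply with `kubectl apply -f -`; the pod should end in Succeeded, as above
}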
Jun 4 10:56:37.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:56:37.719: INFO: namespace: e2e-tests-emptydir-8wvd7, resource: bindings, ignored listing per whitelist Jun 4 10:56:37.748: INFO: namespace e2e-tests-emptydir-8wvd7 deletion completed in 6.12413437s • [SLOW TEST:10.347 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:56:37.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 4 10:56:37.952: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jun 4 10:56:37.962: INFO: Number of nodes with available pods: 0 Jun 4 10:56:37.962: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
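The DaemonSet in this test carries a node selector, so daemon pods only appear once a node is labelled to match, and the later relabel (blue to green) moves them off again. A sketch of a comparable DaemonSet using a hypothetical "color: blue" label key; the test's actual label key, image, and selector are not shown in this log:

package main

import (
  "fmt"

  appsv1 "k8s.io/api/apps/v1"
  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "sigs.k8s.io/yaml"
)

func main() {
  ds := &appsv1.DaemonSet{
    TypeMeta:   metav1.TypeMeta{Kind: "DaemonSet", APIVersion: "apps/v1"},
    ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
    Spec: appsv1.DaemonSetSpec{
      Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "daemon-set"}},
      UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
        Type: appsv1.RollingUpdateDaemonSetStrategyType,
      },
      Template: corev1.PodTemplateSpec{
        ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "daemon-set"}},
        Spec: corev1.PodSpec{
          // Daemon pods only land on nodes carrying this (hypothetical) label.
          NodeSelector: map[string]string{"color": "blue"},
          Containers: []corev1.Container{{
            Name:  "app",
            Image: "docker.io/library/nginx:1.14-alpine",
          }},
        },
      },
    },
  }
  out, err := yaml.Marshal(ds)
  if err != nil {
    panic(err)
  }
  fmt.Print(string(out))
  // Label a node to schedule the daemon pod:  kubectl label node hunter-worker color=blue --overwrite
  // Relabel it to unschedule the pod again:   kubectl label node hunter-worker color=green --overwrite
}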
Jun 4 10:56:37.995: INFO: Number of nodes with available pods: 0 Jun 4 10:56:37.996: INFO: Node hunter-worker is running more than one daemon pod Jun 4 10:56:39.000: INFO: Number of nodes with available pods: 0 Jun 4 10:56:39.000: INFO: Node hunter-worker is running more than one daemon pod Jun 4 10:56:40.107: INFO: Number of nodes with available pods: 0 Jun 4 10:56:40.107: INFO: Node hunter-worker is running more than one daemon pod Jun 4 10:56:41.000: INFO: Number of nodes with available pods: 0 Jun 4 10:56:41.000: INFO: Node hunter-worker is running more than one daemon pod Jun 4 10:56:42.001: INFO: Number of nodes with available pods: 1 Jun 4 10:56:42.001: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jun 4 10:56:42.043: INFO: Number of nodes with available pods: 1 Jun 4 10:56:42.043: INFO: Number of running nodes: 0, number of available pods: 1 Jun 4 10:56:43.049: INFO: Number of nodes with available pods: 0 Jun 4 10:56:43.049: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jun 4 10:56:43.064: INFO: Number of nodes with available pods: 0 Jun 4 10:56:43.064: INFO: Node hunter-worker is running more than one daemon pod Jun 4 10:56:44.070: INFO: Number of nodes with available pods: 0 Jun 4 10:56:44.070: INFO: Node hunter-worker is running more than one daemon pod Jun 4 10:56:45.069: INFO: Number of nodes with available pods: 0 Jun 4 10:56:45.070: INFO: Node hunter-worker is running more than one daemon pod Jun 4 10:56:46.068: INFO: Number of nodes with available pods: 0 Jun 4 10:56:46.068: INFO: Node hunter-worker is running more than one daemon pod Jun 4 10:56:47.069: INFO: Number of nodes with available pods: 0 Jun 4 10:56:47.069: INFO: Node hunter-worker is running more than one daemon pod Jun 4 10:56:48.069: INFO: Number of nodes with available pods: 0 Jun 4 10:56:48.069: INFO: Node hunter-worker is running more than one daemon pod Jun 4 10:56:49.070: INFO: Number of nodes with available pods: 1 Jun 4 10:56:49.070: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-wwm6j, will wait for the garbage collector to delete the pods Jun 4 10:56:49.136: INFO: Deleting DaemonSet.extensions daemon-set took: 6.434446ms Jun 4 10:56:49.237: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.286972ms Jun 4 10:56:53.339: INFO: Number of nodes with available pods: 0 Jun 4 10:56:53.339: INFO: Number of running nodes: 0, number of available pods: 0 Jun 4 10:56:53.341: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-wwm6j/daemonsets","resourceVersion":"14164539"},"items":null} Jun 4 10:56:53.343: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-wwm6j/pods","resourceVersion":"14164539"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:56:53.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-wwm6j" for this 
suite. Jun 4 10:56:59.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:56:59.493: INFO: namespace: e2e-tests-daemonsets-wwm6j, resource: bindings, ignored listing per whitelist Jun 4 10:56:59.550: INFO: namespace e2e-tests-daemonsets-wwm6j deletion completed in 6.119477474s • [SLOW TEST:21.801 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:56:59.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 4 10:57:04.214: INFO: Successfully updated pod "annotationupdate1d422b7d-a652-11ea-86dc-0242ac110018" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:57:06.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bngmd" for this suite. 
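The projected downwardAPI case above mounts the pod's own metadata.annotations through a projected volume and then verifies that an annotation update shows up in the mounted file. A minimal sketch of such a pod; the image, mount path, and sample annotation are assumptions rather than the test's exact spec:

package main

import (
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "sigs.k8s.io/yaml"
)

func main() {
  pod := &corev1.Pod{
    TypeMeta: metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
    ObjectMeta: metav1.ObjectMeta{
      Name:        "annotationupdate-demo",
      Annotations: map[string]string{"build": "one"}, // sample annotation
    },
    Spec: corev1.PodSpec{
      Containers: []corev1.Container{{
        Name:         "client",
        Image:        "docker.io/library/busybox:1.29",
        Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"},
        VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo", ReadOnly: true}},
      }},
      Volumes: []corev1.Volume{{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
          Projected: &corev1.ProjectedVolumeSource{
            Sources: []corev1.VolumeProjection{{
              DownwardAPI: &corev1.DownwardAPIProjection{
                Items: []corev1.DownwardAPIVolumeFile{{
                  Path:     "annotations",
                  FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                }},
              },
            }},
          },
        },
      }},
    },
  }
  out, err := yaml.Marshal(pod)
  if err != nil {
    panic(err)
  }
  fmt.Print(string(out))
  // After `kubectl annotate pod annotationupdate-demo build=two --overwrite`,
  // the mounted annotations file is refreshed on the kubelet's next volume sync.
}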
Jun 4 10:57:28.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:57:28.358: INFO: namespace: e2e-tests-projected-bngmd, resource: bindings, ignored listing per whitelist Jun 4 10:57:28.363: INFO: namespace e2e-tests-projected-bngmd deletion completed in 22.114772191s • [SLOW TEST:28.813 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:57:28.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Jun 4 10:57:32.534: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-2e6af3db-a652-11ea-86dc-0242ac110018", GenerateName:"", Namespace:"e2e-tests-pods-9nkkx", SelfLink:"/api/v1/namespaces/e2e-tests-pods-9nkkx/pods/pod-submit-remove-2e6af3db-a652-11ea-86dc-0242ac110018", UID:"2e6da028-a652-11ea-99e8-0242ac110002", ResourceVersion:"14164672", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726865048, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"451176001"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-v2qp5", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000ad3200), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-v2qp5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001b95968), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0016224e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b959b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b959d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001b959d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001b959dc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726865048, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726865052, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726865052, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726865048, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", 
NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.49", StartTime:(*v1.Time)(0xc0011e3c80), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0011e3ca0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://f3eeb3fae1afa85b43bcfe527171bf12456774aba1b782d8f511518d02237ad1"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:57:41.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-9nkkx" for this suite. Jun 4 10:57:47.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:57:47.793: INFO: namespace: e2e-tests-pods-9nkkx, resource: bindings, ignored listing per whitelist Jun 4 10:57:47.826: INFO: namespace e2e-tests-pods-9nkkx deletion completed in 6.092840545s • [SLOW TEST:19.463 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:57:47.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 4 10:57:47.967: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 4 10:57:52.972: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 4 10:57:52.972: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 4 10:57:52.991: INFO: Deployment "test-cleanup-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-hbhk9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hbhk9/deployments/test-cleanup-deployment,UID:3d097e1b-a652-11ea-99e8-0242ac110002,ResourceVersion:14164744,Generation:1,CreationTimestamp:2020-06-04 10:57:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jun 4 10:57:53.036: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
Jun 4 10:57:53.036: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jun 4 10:57:53.036: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-hbhk9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hbhk9/replicasets/test-cleanup-controller,UID:3a06939f-a652-11ea-99e8-0242ac110002,ResourceVersion:14164745,Generation:1,CreationTimestamp:2020-06-04 10:57:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 3d097e1b-a652-11ea-99e8-0242ac110002 0xc0018da837 0xc0018da838}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 4 10:57:53.047: INFO: Pod "test-cleanup-controller-7bcf7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-7bcf7,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-hbhk9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbhk9/pods/test-cleanup-controller-7bcf7,UID:3a0e3abe-a652-11ea-99e8-0242ac110002,ResourceVersion:14164739,Generation:0,CreationTimestamp:2020-06-04 10:57:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 3a06939f-a652-11ea-99e8-0242ac110002 0xc0018db427 0xc0018db428}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lmpgx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lmpgx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} 
[{default-token-lmpgx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0018db4a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0018db4c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 10:57:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 10:57:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 10:57:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 10:57:47 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.249,StartTime:2020-06-04 10:57:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-04 10:57:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://15eb8d15bdb6aa75b140b6a28a77236bb8d14810f5466fbde2b45bb2f20bb74b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:57:53.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-hbhk9" for this suite. 
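The Deployment dump above shows RevisionHistoryLimit:*0, which is what makes this deployment delete its old ReplicaSets as soon as they are scaled down. A sketch of a comparable Deployment reusing the names and images that appear in this run, with a simplified pod template (not the test's exact object):

package main

import (
  "fmt"

  appsv1 "k8s.io/api/apps/v1"
  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "sigs.k8s.io/yaml"
)

func main() {
  replicas := int32(1)
  history := int32(0) // 0: old ReplicaSets are garbage-collected once fully scaled down
  dep := &appsv1.Deployment{
    TypeMeta:   metav1.TypeMeta{Kind: "Deployment", APIVersion: "apps/v1"},
    ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment", Labels: map[string]string{"name": "cleanup-pod"}},
    Spec: appsv1.DeploymentSpec{
      Replicas:             &replicas,
      RevisionHistoryLimit: &history,
      Selector:             &metav1.LabelSelector{MatchLabels: map[string]string{"name": "cleanup-pod"}},
      Template: corev1.PodTemplateSpec{
        ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "cleanup-pod"}},
        Spec: corev1.PodSpec{
          Containers: []corev1.Container{{
            Name:  "redis",
            Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
          }},
        },
      },
    },
  }
  out, err := yaml.Marshal(dep)
  if err != nil {
    panic(err)
  }
  fmt.Print(string(out)) // after rollouts, superseded ReplicaSets should disappear, as the test asserts
}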
Jun 4 10:57:59.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:57:59.137: INFO: namespace: e2e-tests-deployment-hbhk9, resource: bindings, ignored listing per whitelist Jun 4 10:57:59.218: INFO: namespace e2e-tests-deployment-hbhk9 deletion completed in 6.116529691s • [SLOW TEST:11.391 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:57:59.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-tmvk STEP: Creating a pod to test atomic-volume-subpath Jun 4 10:57:59.463: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-tmvk" in namespace "e2e-tests-subpath-qz8lc" to be "success or failure" Jun 4 10:57:59.479: INFO: Pod "pod-subpath-test-secret-tmvk": Phase="Pending", Reason="", readiness=false. Elapsed: 15.918016ms Jun 4 10:58:01.484: INFO: Pod "pod-subpath-test-secret-tmvk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020035543s Jun 4 10:58:03.488: INFO: Pod "pod-subpath-test-secret-tmvk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024249459s Jun 4 10:58:05.527: INFO: Pod "pod-subpath-test-secret-tmvk": Phase="Running", Reason="", readiness=false. Elapsed: 6.063780274s Jun 4 10:58:07.531: INFO: Pod "pod-subpath-test-secret-tmvk": Phase="Running", Reason="", readiness=false. Elapsed: 8.067821261s Jun 4 10:58:09.536: INFO: Pod "pod-subpath-test-secret-tmvk": Phase="Running", Reason="", readiness=false. Elapsed: 10.072474569s Jun 4 10:58:11.541: INFO: Pod "pod-subpath-test-secret-tmvk": Phase="Running", Reason="", readiness=false. Elapsed: 12.077671257s Jun 4 10:58:13.545: INFO: Pod "pod-subpath-test-secret-tmvk": Phase="Running", Reason="", readiness=false. Elapsed: 14.081983403s Jun 4 10:58:15.569: INFO: Pod "pod-subpath-test-secret-tmvk": Phase="Running", Reason="", readiness=false. Elapsed: 16.105948841s Jun 4 10:58:17.573: INFO: Pod "pod-subpath-test-secret-tmvk": Phase="Running", Reason="", readiness=false. Elapsed: 18.10978042s Jun 4 10:58:19.578: INFO: Pod "pod-subpath-test-secret-tmvk": Phase="Running", Reason="", readiness=false. Elapsed: 20.11422777s Jun 4 10:58:21.582: INFO: Pod "pod-subpath-test-secret-tmvk": Phase="Running", Reason="", readiness=false. Elapsed: 22.11902102s Jun 4 10:58:23.587: INFO: Pod "pod-subpath-test-secret-tmvk": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.123888125s Jun 4 10:58:25.592: INFO: Pod "pod-subpath-test-secret-tmvk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.128375107s STEP: Saw pod success Jun 4 10:58:25.592: INFO: Pod "pod-subpath-test-secret-tmvk" satisfied condition "success or failure" Jun 4 10:58:25.596: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-secret-tmvk container test-container-subpath-secret-tmvk: STEP: delete the pod Jun 4 10:58:25.640: INFO: Waiting for pod pod-subpath-test-secret-tmvk to disappear Jun 4 10:58:25.644: INFO: Pod pod-subpath-test-secret-tmvk no longer exists STEP: Deleting pod pod-subpath-test-secret-tmvk Jun 4 10:58:25.644: INFO: Deleting pod "pod-subpath-test-secret-tmvk" in namespace "e2e-tests-subpath-qz8lc" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:58:25.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-qz8lc" for this suite. Jun 4 10:58:31.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:58:31.734: INFO: namespace: e2e-tests-subpath-qz8lc, resource: bindings, ignored listing per whitelist Jun 4 10:58:31.749: INFO: namespace e2e-tests-subpath-qz8lc deletion completed in 6.099950066s • [SLOW TEST:32.531 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:58:31.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-5439e1de-a652-11ea-86dc-0242ac110018 STEP: Creating secret with name s-test-opt-upd-5439e30f-a652-11ea-86dc-0242ac110018 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-5439e1de-a652-11ea-86dc-0242ac110018 STEP: Updating secret s-test-opt-upd-5439e30f-a652-11ea-86dc-0242ac110018 STEP: Creating secret with name s-test-opt-create-5439e35e-a652-11ea-86dc-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:58:40.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tbtgg" for this suite. 
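The projected-secret case above builds a volume from secrets marked optional, then deletes one, updates another, and creates a third while the pod watches the mounted directory. A sketch of a pod with two optional secret sources; the secret names are shortened from the run's generated names and the busybox watcher is an assumption:

package main

import (
  "fmt"

  corev1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  "sigs.k8s.io/yaml"
)

func main() {
  optional := true
  pod := &corev1.Pod{
    TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
    ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-optional-demo"},
    Spec: corev1.PodSpec{
      Containers: []corev1.Container{{
        Name:         "watcher",
        Image:        "docker.io/library/busybox:1.29",
        Command:      []string{"sh", "-c", "while true; do ls -R /projected; echo; sleep 5; done"},
        VolumeMounts: []corev1.VolumeMount{{Name: "creds", MountPath: "/projected", ReadOnly: true}},
      }},
      Volumes: []corev1.Volume{{
        Name: "creds",
        VolumeSource: corev1.VolumeSource{
          Projected: &corev1.ProjectedVolumeSource{
            Sources: []corev1.VolumeProjection{
              {Secret: &corev1.SecretProjection{
                // may be deleted after the pod starts; Optional keeps the pod healthy
                LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
                Optional:             &optional,
              }},
              {Secret: &corev1.SecretProjection{
                // may not exist yet when the pod starts
                LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-create"},
                Optional:             &optional,
              }},
            },
          },
        },
      }},
    },
  }
  out, err := yaml.Marshal(pod)
  if err != nil {
    panic(err)
  }
  fmt.Print(string(out)) // deleting, updating, or creating the referenced secrets shows up under /projected after a kubelet resync
}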
Jun 4 10:59:02.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:59:02.167: INFO: namespace: e2e-tests-projected-tbtgg, resource: bindings, ignored listing per whitelist Jun 4 10:59:02.189: INFO: namespace e2e-tests-projected-tbtgg deletion completed in 22.100308385s • [SLOW TEST:30.440 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:59:02.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-66598f5d-a652-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume secrets Jun 4 10:59:02.312: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-665b4730-a652-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-zhsfp" to be "success or failure" Jun 4 10:59:02.319: INFO: Pod "pod-projected-secrets-665b4730-a652-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158811ms Jun 4 10:59:04.325: INFO: Pod "pod-projected-secrets-665b4730-a652-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012071739s Jun 4 10:59:06.331: INFO: Pod "pod-projected-secrets-665b4730-a652-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018440687s STEP: Saw pod success Jun 4 10:59:06.331: INFO: Pod "pod-projected-secrets-665b4730-a652-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 10:59:06.334: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-665b4730-a652-11ea-86dc-0242ac110018 container projected-secret-volume-test: STEP: delete the pod Jun 4 10:59:06.350: INFO: Waiting for pod pod-projected-secrets-665b4730-a652-11ea-86dc-0242ac110018 to disappear Jun 4 10:59:06.378: INFO: Pod pod-projected-secrets-665b4730-a652-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:59:06.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zhsfp" for this suite. 
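A minimal sketch of the non-root defaultMode and fsGroup combination this test covers (names, IDs, and the image are placeholders, not taken from the run):

    cat <<'EOF' | kubectl create -f - --namespace=<test-namespace>
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secret-modes-example
    spec:
      securityContext:
        runAsUser: 1000      # run the container as non-root
        fsGroup: 2000        # group ownership applied to the projected files
      containers:
      - name: projected-secret-volume-test
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "ls -ln /etc/projected"]
        volumeMounts:
        - name: projected-secret
          mountPath: /etc/projected
          readOnly: true
      volumes:
      - name: projected-secret
        projected:
          defaultMode: 0440    # file mode applied to the projected keys
          sources:
          - secret:
              name: projected-secret-test
    EOF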
Jun 4 10:59:12.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:59:12.471: INFO: namespace: e2e-tests-projected-zhsfp, resource: bindings, ignored listing per whitelist Jun 4 10:59:12.516: INFO: namespace e2e-tests-projected-zhsfp deletion completed in 6.134622646s • [SLOW TEST:10.327 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:59:12.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Jun 4 10:59:12.632: INFO: Waiting up to 5m0s for pod "var-expansion-6c81c291-a652-11ea-86dc-0242ac110018" in namespace "e2e-tests-var-expansion-g7rnt" to be "success or failure" Jun 4 10:59:12.672: INFO: Pod "var-expansion-6c81c291-a652-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 40.183573ms Jun 4 10:59:14.677: INFO: Pod "var-expansion-6c81c291-a652-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044717988s Jun 4 10:59:16.681: INFO: Pod "var-expansion-6c81c291-a652-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048747104s STEP: Saw pod success Jun 4 10:59:16.681: INFO: Pod "var-expansion-6c81c291-a652-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 10:59:16.683: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-6c81c291-a652-11ea-86dc-0242ac110018 container dapi-container: STEP: delete the pod Jun 4 10:59:16.801: INFO: Waiting for pod var-expansion-6c81c291-a652-11ea-86dc-0242ac110018 to disappear Jun 4 10:59:16.923: INFO: Pod var-expansion-6c81c291-a652-11ea-86dc-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:59:16.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-g7rnt" for this suite. 
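A minimal sketch of the env-composition pattern exercised above; the variable names, values, and image are illustrative only:

    cat <<'EOF' | kubectl create -f - --namespace=<test-namespace>
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-example
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "env | grep FOOBAR"]
        env:
        - name: FOO
          value: foo-value
        - name: BAR
          value: bar-value
        - name: FOOBAR
          value: "$(FOO);;$(BAR)"   # $(VAR) references earlier entries and is expanded into the composed value
    EOF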
Jun 4 10:59:22.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:59:22.960: INFO: namespace: e2e-tests-var-expansion-g7rnt, resource: bindings, ignored listing per whitelist Jun 4 10:59:23.025: INFO: namespace e2e-tests-var-expansion-g7rnt deletion completed in 6.097018875s • [SLOW TEST:10.508 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:59:23.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0604 10:59:33.184616 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 4 10:59:33.184: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:59:33.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-jkwwb" for this suite. 
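The garbage-collector behaviour above can be approximated from the command line; a rough sketch, assuming an RC manifest, name, and label of your own (all placeholders):

    # Create an RC, then delete it without orphaning its pods;
    # the garbage collector removes the pods via their ownerReferences.
    kubectl create -f simpletest-rc.yaml --namespace=<test-namespace>
    kubectl delete rc simpletest --namespace=<test-namespace>
    kubectl get pods -l name=simpletest --namespace=<test-namespace>   # should drain to empty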
Jun 4 10:59:39.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:59:39.229: INFO: namespace: e2e-tests-gc-jkwwb, resource: bindings, ignored listing per whitelist Jun 4 10:59:39.285: INFO: namespace e2e-tests-gc-jkwwb deletion completed in 6.097372534s • [SLOW TEST:16.260 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:59:39.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Jun 4 10:59:40.177: INFO: Waiting up to 5m0s for pod "client-containers-7cedf355-a652-11ea-86dc-0242ac110018" in namespace "e2e-tests-containers-rm972" to be "success or failure" Jun 4 10:59:40.191: INFO: Pod "client-containers-7cedf355-a652-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.051083ms Jun 4 10:59:42.197: INFO: Pod "client-containers-7cedf355-a652-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019553799s Jun 4 10:59:44.202: INFO: Pod "client-containers-7cedf355-a652-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024918061s STEP: Saw pod success Jun 4 10:59:44.202: INFO: Pod "client-containers-7cedf355-a652-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 10:59:44.207: INFO: Trying to get logs from node hunter-worker pod client-containers-7cedf355-a652-11ea-86dc-0242ac110018 container test-container: STEP: delete the pod Jun 4 10:59:44.242: INFO: Waiting for pod client-containers-7cedf355-a652-11ea-86dc-0242ac110018 to disappear Jun 4 10:59:44.349: INFO: Pod client-containers-7cedf355-a652-11ea-86dc-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:59:44.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-rm972" for this suite. 
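A minimal sketch of overriding an image's default command (its ENTRYPOINT), as the Docker Containers test above does; the image and the echoed strings are placeholders:

    cat <<'EOF' | kubectl create -f - --namespace=<test-namespace>
    apiVersion: v1
    kind: Pod
    metadata:
      name: client-containers-example
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["/bin/echo"]               # .command overrides the image ENTRYPOINT
        args: ["entrypoint", "overridden"]   # .args overrides the image CMD
    EOF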
Jun 4 10:59:50.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 10:59:50.443: INFO: namespace: e2e-tests-containers-rm972, resource: bindings, ignored listing per whitelist Jun 4 10:59:50.469: INFO: namespace e2e-tests-containers-rm972 deletion completed in 6.115022653s • [SLOW TEST:11.184 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 10:59:50.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 4 10:59:50.594: INFO: Waiting up to 5m0s for pod "pod-8322b535-a652-11ea-86dc-0242ac110018" in namespace "e2e-tests-emptydir-nn5z6" to be "success or failure" Jun 4 10:59:50.610: INFO: Pod "pod-8322b535-a652-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.162913ms Jun 4 10:59:52.702: INFO: Pod "pod-8322b535-a652-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108031715s Jun 4 10:59:54.727: INFO: Pod "pod-8322b535-a652-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.132229467s STEP: Saw pod success Jun 4 10:59:54.727: INFO: Pod "pod-8322b535-a652-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 10:59:54.729: INFO: Trying to get logs from node hunter-worker2 pod pod-8322b535-a652-11ea-86dc-0242ac110018 container test-container: STEP: delete the pod Jun 4 10:59:54.775: INFO: Waiting for pod pod-8322b535-a652-11ea-86dc-0242ac110018 to disappear Jun 4 10:59:54.785: INFO: Pod pod-8322b535-a652-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 10:59:54.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-nn5z6" for this suite. 
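A rough sketch of the emptyDir (root,0666,default) case: a root container on a default-medium emptyDir writes a file, sets mode 0666, and prints the result. Names and the image are placeholders; the conformance test itself drives this through a dedicated test image:

    cat <<'EOF' | kubectl create -f - --namespace=<test-namespace>
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-example
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "echo content > /test-volume/test-file && chmod 0666 /test-volume/test-file && ls -l /test-volume/test-file"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}      # default medium (node disk); medium: Memory would give tmpfs
    EOF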
Jun 4 11:00:00.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:00:00.862: INFO: namespace: e2e-tests-emptydir-nn5z6, resource: bindings, ignored listing per whitelist Jun 4 11:00:00.893: INFO: namespace e2e-tests-emptydir-nn5z6 deletion completed in 6.103926513s • [SLOW TEST:10.424 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:00:00.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jun 4 11:00:00.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-59hln' Jun 4 11:00:01.250: INFO: stderr: "" Jun 4 11:00:01.250: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jun 4 11:00:02.255: INFO: Selector matched 1 pods for map[app:redis] Jun 4 11:00:02.255: INFO: Found 0 / 1 Jun 4 11:00:03.254: INFO: Selector matched 1 pods for map[app:redis] Jun 4 11:00:03.254: INFO: Found 0 / 1 Jun 4 11:00:04.255: INFO: Selector matched 1 pods for map[app:redis] Jun 4 11:00:04.255: INFO: Found 0 / 1 Jun 4 11:00:05.255: INFO: Selector matched 1 pods for map[app:redis] Jun 4 11:00:05.255: INFO: Found 1 / 1 Jun 4 11:00:05.255: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 4 11:00:05.258: INFO: Selector matched 1 pods for map[app:redis] Jun 4 11:00:05.258: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 4 11:00:05.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-6x9jx --namespace=e2e-tests-kubectl-59hln -p {"metadata":{"annotations":{"x":"y"}}}' Jun 4 11:00:05.368: INFO: stderr: "" Jun 4 11:00:05.368: INFO: stdout: "pod/redis-master-6x9jx patched\n" STEP: checking annotations Jun 4 11:00:05.409: INFO: Selector matched 1 pods for map[app:redis] Jun 4 11:00:05.409: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:00:05.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-59hln" for this suite. 
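The patch step recorded above boils down to two commands; the pod and namespace names are placeholders for the generated ones in the log:

    # Add an annotation with a strategic-merge patch, then read it back
    kubectl patch pod redis-master-<hash> --namespace=<test-namespace> \
      -p '{"metadata":{"annotations":{"x":"y"}}}'
    kubectl get pod redis-master-<hash> --namespace=<test-namespace> \
      -o jsonpath='{.metadata.annotations.x}'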
Jun 4 11:00:27.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:00:27.445: INFO: namespace: e2e-tests-kubectl-59hln, resource: bindings, ignored listing per whitelist Jun 4 11:00:27.490: INFO: namespace e2e-tests-kubectl-59hln deletion completed in 22.076534706s • [SLOW TEST:26.597 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:00:27.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-lsqk9 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet Jun 4 11:00:27.638: INFO: Found 0 stateful pods, waiting for 3 Jun 4 11:00:37.643: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 4 11:00:37.643: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 4 11:00:37.643: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Jun 4 11:00:47.643: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 4 11:00:47.643: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 4 11:00:47.643: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jun 4 11:00:47.672: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jun 4 11:00:57.742: INFO: Updating stateful set ss2 Jun 4 11:00:57.770: INFO: Waiting for Pod e2e-tests-statefulset-lsqk9/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Jun 4 11:01:07.940: INFO: Found 2 stateful pods, waiting for 3 Jun 4 11:01:17.945: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 4 
11:01:17.945: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 4 11:01:17.945: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jun 4 11:01:17.969: INFO: Updating stateful set ss2 Jun 4 11:01:17.976: INFO: Waiting for Pod e2e-tests-statefulset-lsqk9/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 4 11:01:27.985: INFO: Waiting for Pod e2e-tests-statefulset-lsqk9/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 4 11:01:38.002: INFO: Updating stateful set ss2 Jun 4 11:01:38.018: INFO: Waiting for StatefulSet e2e-tests-statefulset-lsqk9/ss2 to complete update Jun 4 11:01:38.018: INFO: Waiting for Pod e2e-tests-statefulset-lsqk9/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 4 11:01:48.026: INFO: Waiting for StatefulSet e2e-tests-statefulset-lsqk9/ss2 to complete update Jun 4 11:01:48.026: INFO: Waiting for Pod e2e-tests-statefulset-lsqk9/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 4 11:01:58.026: INFO: Deleting all statefulset in ns e2e-tests-statefulset-lsqk9 Jun 4 11:01:58.029: INFO: Scaling statefulset ss2 to 0 Jun 4 11:02:18.046: INFO: Waiting for statefulset status.replicas updated to 0 Jun 4 11:02:18.048: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:02:18.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-lsqk9" for this suite. 
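A sketch of the canary and phased roll-out driven above, expressed as kubectl patches. The container name "nginx" and the namespace are assumptions; the images are the ones named in the log:

    # New revision: update the pod template image
    kubectl patch statefulset ss2 --namespace=<test-namespace> \
      -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image":"docker.io/library/nginx:1.15-alpine"}]}}}}'
    # Canary: only ordinals >= partition move to the new revision
    kubectl patch statefulset ss2 --namespace=<test-namespace> \
      -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
    # Phased roll-out: lower the partition, then wait for completion
    kubectl patch statefulset ss2 --namespace=<test-namespace> \
      -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
    kubectl rollout status statefulset/ss2 --namespace=<test-namespace>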
Jun 4 11:02:24.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:02:24.119: INFO: namespace: e2e-tests-statefulset-lsqk9, resource: bindings, ignored listing per whitelist Jun 4 11:02:24.170: INFO: namespace e2e-tests-statefulset-lsqk9 deletion completed in 6.099774363s • [SLOW TEST:116.680 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:02:24.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jun 4 11:02:24.324: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-64tdr,SelfLink:/api/v1/namespaces/e2e-tests-watch-64tdr/configmaps/e2e-watch-test-watch-closed,UID:debf6de9-a652-11ea-99e8-0242ac110002,ResourceVersion:14165824,Generation:0,CreationTimestamp:2020-06-04 11:02:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 4 11:02:24.324: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-64tdr,SelfLink:/api/v1/namespaces/e2e-tests-watch-64tdr/configmaps/e2e-watch-test-watch-closed,UID:debf6de9-a652-11ea-99e8-0242ac110002,ResourceVersion:14165825,Generation:0,CreationTimestamp:2020-06-04 11:02:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch 
closed Jun 4 11:02:24.360: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-64tdr,SelfLink:/api/v1/namespaces/e2e-tests-watch-64tdr/configmaps/e2e-watch-test-watch-closed,UID:debf6de9-a652-11ea-99e8-0242ac110002,ResourceVersion:14165826,Generation:0,CreationTimestamp:2020-06-04 11:02:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 4 11:02:24.360: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-64tdr,SelfLink:/api/v1/namespaces/e2e-tests-watch-64tdr/configmaps/e2e-watch-test-watch-closed,UID:debf6de9-a652-11ea-99e8-0242ac110002,ResourceVersion:14165827,Generation:0,CreationTimestamp:2020-06-04 11:02:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:02:24.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-64tdr" for this suite. Jun 4 11:02:30.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:02:30.442: INFO: namespace: e2e-tests-watch-64tdr, resource: bindings, ignored listing per whitelist Jun 4 11:02:30.476: INFO: namespace e2e-tests-watch-64tdr deletion completed in 6.111948413s • [SLOW TEST:6.306 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:02:30.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 4 11:02:30.606: INFO: Waiting up to 5m0s for pod "downward-api-e2832c75-a652-11ea-86dc-0242ac110018" in namespace "e2e-tests-downward-api-p85zx" to be "success or failure" Jun 4 11:02:30.626: INFO: Pod "downward-api-e2832c75-a652-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.583958ms Jun 4 11:02:32.729: INFO: Pod "downward-api-e2832c75-a652-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122866126s Jun 4 11:02:34.733: INFO: Pod "downward-api-e2832c75-a652-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.127317621s STEP: Saw pod success Jun 4 11:02:34.733: INFO: Pod "downward-api-e2832c75-a652-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:02:34.737: INFO: Trying to get logs from node hunter-worker2 pod downward-api-e2832c75-a652-11ea-86dc-0242ac110018 container dapi-container: STEP: delete the pod Jun 4 11:02:34.755: INFO: Waiting for pod downward-api-e2832c75-a652-11ea-86dc-0242ac110018 to disappear Jun 4 11:02:34.760: INFO: Pod downward-api-e2832c75-a652-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:02:34.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-p85zx" for this suite. Jun 4 11:02:40.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:02:40.826: INFO: namespace: e2e-tests-downward-api-p85zx, resource: bindings, ignored listing per whitelist Jun 4 11:02:40.877: INFO: namespace e2e-tests-downward-api-p85zx deletion completed in 6.093365606s • [SLOW TEST:10.400 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:02:40.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Jun 4 11:02:41.007: INFO: Waiting up to 5m0s for pod "pod-e8b62137-a652-11ea-86dc-0242ac110018" in namespace "e2e-tests-emptydir-52mpm" to be "success or failure" Jun 4 11:02:41.016: INFO: Pod "pod-e8b62137-a652-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.925901ms Jun 4 11:02:43.020: INFO: Pod "pod-e8b62137-a652-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013548448s Jun 4 11:02:45.025: INFO: Pod "pod-e8b62137-a652-11ea-86dc-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.017988464s Jun 4 11:02:47.029: INFO: Pod "pod-e8b62137-a652-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.022555104s STEP: Saw pod success Jun 4 11:02:47.029: INFO: Pod "pod-e8b62137-a652-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:02:47.032: INFO: Trying to get logs from node hunter-worker pod pod-e8b62137-a652-11ea-86dc-0242ac110018 container test-container: STEP: delete the pod Jun 4 11:02:47.073: INFO: Waiting for pod pod-e8b62137-a652-11ea-86dc-0242ac110018 to disappear Jun 4 11:02:47.083: INFO: Pod pod-e8b62137-a652-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:02:47.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-52mpm" for this suite. Jun 4 11:02:53.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:02:53.155: INFO: namespace: e2e-tests-emptydir-52mpm, resource: bindings, ignored listing per whitelist Jun 4 11:02:53.217: INFO: namespace e2e-tests-emptydir-52mpm deletion completed in 6.126836161s • [SLOW TEST:12.339 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:02:53.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 4 11:02:53.385: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-d26n8,SelfLink:/api/v1/namespaces/e2e-tests-watch-d26n8/configmaps/e2e-watch-test-resource-version,UID:f01351d4-a652-11ea-99e8-0242ac110002,ResourceVersion:14165943,Generation:0,CreationTimestamp:2020-06-04 11:02:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 4 11:02:53.385: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-d26n8,SelfLink:/api/v1/namespaces/e2e-tests-watch-d26n8/configmaps/e2e-watch-test-resource-version,UID:f01351d4-a652-11ea-99e8-0242ac110002,ResourceVersion:14165944,Generation:0,CreationTimestamp:2020-06-04 11:02:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:02:53.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-d26n8" for this suite. Jun 4 11:02:59.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:02:59.492: INFO: namespace: e2e-tests-watch-d26n8, resource: bindings, ignored listing per whitelist Jun 4 11:02:59.494: INFO: namespace e2e-tests-watch-d26n8 deletion completed in 6.091804205s • [SLOW TEST:6.277 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:02:59.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 4 11:02:59.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-b7zvr' Jun 4 11:03:02.706: INFO: stderr: "" Jun 4 11:03:02.706: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jun 4 11:03:07.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-b7zvr -o json' Jun 4 11:03:07.863: INFO: stderr: "" Jun 4 11:03:07.863: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-06-04T11:03:02Z\",\n 
\"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-b7zvr\",\n \"resourceVersion\": \"14165984\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-b7zvr/pods/e2e-test-nginx-pod\",\n \"uid\": \"f5a3a9bf-a652-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-hxb27\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-hxb27\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-hxb27\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-04T11:03:02Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-04T11:03:05Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-04T11:03:05Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-04T11:03:02Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://331d76d343b412ea7280c5baf27a0549f78013885ea5fd8205539a47dc3853fd\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-06-04T11:03:05Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.59\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-06-04T11:03:02Z\"\n }\n}\n" STEP: replace the image in the pod Jun 4 11:03:07.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-b7zvr' Jun 4 11:03:08.115: INFO: stderr: "" Jun 4 11:03:08.115: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Jun 4 11:03:08.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-b7zvr' Jun 4 11:03:11.890: INFO: stderr: "" Jun 4 
11:03:11.890: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:03:11.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-b7zvr" for this suite. Jun 4 11:03:17.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:03:17.947: INFO: namespace: e2e-tests-kubectl-b7zvr, resource: bindings, ignored listing per whitelist Jun 4 11:03:17.988: INFO: namespace e2e-tests-kubectl-b7zvr deletion completed in 6.095018465s • [SLOW TEST:18.494 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:03:17.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jun 4 11:03:18.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-df8pv' Jun 4 11:03:18.426: INFO: stderr: "" Jun 4 11:03:18.426: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 4 11:03:18.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-df8pv' Jun 4 11:03:18.538: INFO: stderr: "" Jun 4 11:03:18.538: INFO: stdout: "update-demo-nautilus-2nvkx update-demo-nautilus-mnhpc " Jun 4 11:03:18.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2nvkx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-df8pv' Jun 4 11:03:18.667: INFO: stderr: "" Jun 4 11:03:18.667: INFO: stdout: "" Jun 4 11:03:18.667: INFO: update-demo-nautilus-2nvkx is created but not running Jun 4 11:03:23.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-df8pv' Jun 4 11:03:23.785: INFO: stderr: "" Jun 4 11:03:23.785: INFO: stdout: "update-demo-nautilus-2nvkx update-demo-nautilus-mnhpc " Jun 4 11:03:23.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2nvkx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-df8pv' Jun 4 11:03:23.881: INFO: stderr: "" Jun 4 11:03:23.881: INFO: stdout: "true" Jun 4 11:03:23.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2nvkx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-df8pv' Jun 4 11:03:23.983: INFO: stderr: "" Jun 4 11:03:23.983: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 4 11:03:23.983: INFO: validating pod update-demo-nautilus-2nvkx Jun 4 11:03:24.001: INFO: got data: { "image": "nautilus.jpg" } Jun 4 11:03:24.001: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 4 11:03:24.001: INFO: update-demo-nautilus-2nvkx is verified up and running Jun 4 11:03:24.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mnhpc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-df8pv' Jun 4 11:03:24.117: INFO: stderr: "" Jun 4 11:03:24.117: INFO: stdout: "true" Jun 4 11:03:24.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mnhpc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-df8pv' Jun 4 11:03:24.225: INFO: stderr: "" Jun 4 11:03:24.225: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 4 11:03:24.225: INFO: validating pod update-demo-nautilus-mnhpc Jun 4 11:03:24.243: INFO: got data: { "image": "nautilus.jpg" } Jun 4 11:03:24.243: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 4 11:03:24.243: INFO: update-demo-nautilus-mnhpc is verified up and running STEP: using delete to clean up resources Jun 4 11:03:24.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-df8pv' Jun 4 11:03:24.352: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 4 11:03:24.352: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 4 11:03:24.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-df8pv' Jun 4 11:03:24.461: INFO: stderr: "No resources found.\n" Jun 4 11:03:24.461: INFO: stdout: "" Jun 4 11:03:24.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-df8pv -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 4 11:03:24.562: INFO: stderr: "" Jun 4 11:03:24.562: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:03:24.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-df8pv" for this suite. Jun 4 11:03:46.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:03:46.646: INFO: namespace: e2e-tests-kubectl-df8pv, resource: bindings, ignored listing per whitelist Jun 4 11:03:46.655: INFO: namespace e2e-tests-kubectl-df8pv deletion completed in 22.089437889s • [SLOW TEST:28.667 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:03:46.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0604 11:03:57.895930 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 4 11:03:57.896: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:03:57.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-srt8z" for this suite. Jun 4 11:04:05.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:04:05.935: INFO: namespace: e2e-tests-gc-srt8z, resource: bindings, ignored listing per whitelist Jun 4 11:04:05.994: INFO: namespace e2e-tests-gc-srt8z deletion completed in 8.094851616s • [SLOW TEST:19.339 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:04:05.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-p5xt6 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-p5xt6 to expose endpoints map[] Jun 4 11:04:06.170: INFO: Get endpoints failed (15.288409ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jun 4 11:04:07.173: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-p5xt6 exposes endpoints map[] (1.018326037s elapsed) STEP: Creating pod 
pod1 in namespace e2e-tests-services-p5xt6 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-p5xt6 to expose endpoints map[pod1:[80]] Jun 4 11:04:11.241: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-p5xt6 exposes endpoints map[pod1:[80]] (4.062330026s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-p5xt6 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-p5xt6 to expose endpoints map[pod2:[80] pod1:[80]] Jun 4 11:04:14.319: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-p5xt6 exposes endpoints map[pod1:[80] pod2:[80]] (3.072844859s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-p5xt6 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-p5xt6 to expose endpoints map[pod2:[80]] Jun 4 11:04:15.345: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-p5xt6 exposes endpoints map[pod2:[80]] (1.019772951s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-p5xt6 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-p5xt6 to expose endpoints map[] Jun 4 11:04:16.371: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-p5xt6 exposes endpoints map[] (1.022797934s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:04:16.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-p5xt6" for this suite. Jun 4 11:04:22.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:04:22.572: INFO: namespace: e2e-tests-services-p5xt6, resource: bindings, ignored listing per whitelist Jun 4 11:04:22.640: INFO: namespace e2e-tests-services-p5xt6 deletion completed in 6.103778613s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:16.645 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:04:22.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image 
docker.io/library/nginx:1.14-alpine Jun 4 11:04:22.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-cct9k' Jun 4 11:04:22.869: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 4 11:04:22.869: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Jun 4 11:04:24.892: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-z44q8] Jun 4 11:04:24.892: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-z44q8" in namespace "e2e-tests-kubectl-cct9k" to be "running and ready" Jun 4 11:04:24.895: INFO: Pod "e2e-test-nginx-rc-z44q8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.106425ms Jun 4 11:04:26.900: INFO: Pod "e2e-test-nginx-rc-z44q8": Phase="Running", Reason="", readiness=true. Elapsed: 2.007619838s Jun 4 11:04:26.900: INFO: Pod "e2e-test-nginx-rc-z44q8" satisfied condition "running and ready" Jun 4 11:04:26.900: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-z44q8] Jun 4 11:04:26.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-cct9k' Jun 4 11:04:27.020: INFO: stderr: "" Jun 4 11:04:27.020: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Jun 4 11:04:27.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-cct9k' Jun 4 11:04:27.133: INFO: stderr: "" Jun 4 11:04:27.133: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:04:27.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-cct9k" for this suite. 
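Note: the flow exercised above can be reproduced by hand against a similar v1.13-era cluster. The sketch below uses an illustrative name (my-nginx-rc) rather than the generated e2e name, and deliberately keeps the deprecated --generator=run/v1 form that this test exercises:

# Deprecated generator: creates a ReplicationController (the warning above points to
# 'kubectl run --generator=run-pod/v1' or 'kubectl create' as replacements)
kubectl run my-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1

# The run/v1 generator labels pods with run=<name>; logs can be read through the rc reference
kubectl get rc my-nginx-rc
kubectl get pods -l run=my-nginx-rc
kubectl logs rc/my-nginx-rc

# Cleanup; the default cascading delete also removes the controlled pod
kubectl delete rc my-nginx-rc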
Jun 4 11:04:49.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:04:49.247: INFO: namespace: e2e-tests-kubectl-cct9k, resource: bindings, ignored listing per whitelist Jun 4 11:04:49.260: INFO: namespace e2e-tests-kubectl-cct9k deletion completed in 22.123083553s • [SLOW TEST:26.620 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:04:49.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-353ab495-a653-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume secrets Jun 4 11:04:49.387: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-353b5e6d-a653-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-pfmgt" to be "success or failure" Jun 4 11:04:49.401: INFO: Pod "pod-projected-secrets-353b5e6d-a653-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.915952ms Jun 4 11:04:51.405: INFO: Pod "pod-projected-secrets-353b5e6d-a653-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017793328s Jun 4 11:04:53.623: INFO: Pod "pod-projected-secrets-353b5e6d-a653-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.235560583s STEP: Saw pod success Jun 4 11:04:53.623: INFO: Pod "pod-projected-secrets-353b5e6d-a653-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:04:53.626: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-353b5e6d-a653-11ea-86dc-0242ac110018 container projected-secret-volume-test: STEP: delete the pod Jun 4 11:04:53.711: INFO: Waiting for pod pod-projected-secrets-353b5e6d-a653-11ea-86dc-0242ac110018 to disappear Jun 4 11:04:53.760: INFO: Pod pod-projected-secrets-353b5e6d-a653-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:04:53.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pfmgt" for this suite. 
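Note: the "with mappings" variant exercises the items/path remapping of a projected secret source. A minimal hand-written pod of that shape looks roughly like the sketch below (secret name, key, paths and image are illustrative, not the generated ones above):

kubectl create secret generic demo-secret --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # Prints the remapped file; the original key name does not appear in the volume
    command: ["sh", "-c", "cat /etc/projected-secret/new-path-data-1"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected-secret
  volumes:
  - name: projected-vol
    projected:
      sources:
      - secret:
          name: demo-secret
          items:
          - key: data-1
            path: new-path-data-1   # the key is exposed under this relative path
EOF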
Jun 4 11:04:59.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:04:59.918: INFO: namespace: e2e-tests-projected-pfmgt, resource: bindings, ignored listing per whitelist Jun 4 11:04:59.942: INFO: namespace e2e-tests-projected-pfmgt deletion completed in 6.177817175s • [SLOW TEST:10.681 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:04:59.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-h9vjr.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-h9vjr.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-h9vjr.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-h9vjr.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-h9vjr.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-h9vjr.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 4 11:05:06.107: INFO: DNS probes using e2e-tests-dns-h9vjr/dns-test-3b928cef-a653-11ea-86dc-0242ac110018 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:05:06.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-h9vjr" for this suite. 
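Note: the wheezy/jessie probe loops above boil down to dig and getent lookups against the cluster DNS service. A quick interactive sketch of the same checks (pod name and image are illustrative; any image that ships dig will do):

kubectl run dns-check --image=tutum/dnsutils --restart=Never --command -- sleep 3600
kubectl exec dns-check -- dig +search +noall +answer kubernetes.default A
kubectl exec dns-check -- dig +noall +answer kubernetes.default.svc.cluster.local A
kubectl exec dns-check -- getent hosts kubernetes.default.svc.cluster.local
kubectl delete pod dns-check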
Jun 4 11:05:12.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:05:12.218: INFO: namespace: e2e-tests-dns-h9vjr, resource: bindings, ignored listing per whitelist Jun 4 11:05:12.267: INFO: namespace e2e-tests-dns-h9vjr deletion completed in 6.100253849s • [SLOW TEST:12.325 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:05:12.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-gcgfj [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jun 4 11:05:12.395: INFO: Found 0 stateful pods, waiting for 3 Jun 4 11:05:22.400: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 4 11:05:22.400: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 4 11:05:22.400: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jun 4 11:05:22.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gcgfj ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 4 11:05:22.694: INFO: stderr: "I0604 11:05:22.531934 851 log.go:172] (0xc0006ce4d0) (0xc00057d4a0) Create stream\nI0604 11:05:22.531993 851 log.go:172] (0xc0006ce4d0) (0xc00057d4a0) Stream added, broadcasting: 1\nI0604 11:05:22.534637 851 log.go:172] (0xc0006ce4d0) Reply frame received for 1\nI0604 11:05:22.534690 851 log.go:172] (0xc0006ce4d0) (0xc000730000) Create stream\nI0604 11:05:22.534701 851 log.go:172] (0xc0006ce4d0) (0xc000730000) Stream added, broadcasting: 3\nI0604 11:05:22.535524 851 log.go:172] (0xc0006ce4d0) Reply frame received for 3\nI0604 11:05:22.535560 851 log.go:172] (0xc0006ce4d0) (0xc00059a000) Create stream\nI0604 11:05:22.535575 851 log.go:172] (0xc0006ce4d0) (0xc00059a000) Stream added, broadcasting: 5\nI0604 11:05:22.536333 851 log.go:172] (0xc0006ce4d0) Reply frame received for 5\nI0604 11:05:22.687385 851 log.go:172] (0xc0006ce4d0) Data frame received for 3\nI0604 11:05:22.687425 851 log.go:172] (0xc000730000) (3) Data frame handling\nI0604 11:05:22.687451 851 log.go:172] (0xc000730000) (3) Data frame 
sent\nI0604 11:05:22.687463 851 log.go:172] (0xc0006ce4d0) Data frame received for 3\nI0604 11:05:22.687473 851 log.go:172] (0xc000730000) (3) Data frame handling\nI0604 11:05:22.687584 851 log.go:172] (0xc0006ce4d0) Data frame received for 5\nI0604 11:05:22.687615 851 log.go:172] (0xc00059a000) (5) Data frame handling\nI0604 11:05:22.689639 851 log.go:172] (0xc0006ce4d0) Data frame received for 1\nI0604 11:05:22.689684 851 log.go:172] (0xc00057d4a0) (1) Data frame handling\nI0604 11:05:22.689712 851 log.go:172] (0xc00057d4a0) (1) Data frame sent\nI0604 11:05:22.689735 851 log.go:172] (0xc0006ce4d0) (0xc00057d4a0) Stream removed, broadcasting: 1\nI0604 11:05:22.689773 851 log.go:172] (0xc0006ce4d0) Go away received\nI0604 11:05:22.689924 851 log.go:172] (0xc0006ce4d0) (0xc00057d4a0) Stream removed, broadcasting: 1\nI0604 11:05:22.689945 851 log.go:172] (0xc0006ce4d0) (0xc000730000) Stream removed, broadcasting: 3\nI0604 11:05:22.689951 851 log.go:172] (0xc0006ce4d0) (0xc00059a000) Stream removed, broadcasting: 5\n" Jun 4 11:05:22.694: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 4 11:05:22.694: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jun 4 11:05:32.729: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jun 4 11:05:42.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gcgfj ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:05:42.984: INFO: stderr: "I0604 11:05:42.899707 873 log.go:172] (0xc000138790) (0xc0005c74a0) Create stream\nI0604 11:05:42.899790 873 log.go:172] (0xc000138790) (0xc0005c74a0) Stream added, broadcasting: 1\nI0604 11:05:42.902982 873 log.go:172] (0xc000138790) Reply frame received for 1\nI0604 11:05:42.903056 873 log.go:172] (0xc000138790) (0xc0002f4000) Create stream\nI0604 11:05:42.903086 873 log.go:172] (0xc000138790) (0xc0002f4000) Stream added, broadcasting: 3\nI0604 11:05:42.904096 873 log.go:172] (0xc000138790) Reply frame received for 3\nI0604 11:05:42.904135 873 log.go:172] (0xc000138790) (0xc0005c7540) Create stream\nI0604 11:05:42.904149 873 log.go:172] (0xc000138790) (0xc0005c7540) Stream added, broadcasting: 5\nI0604 11:05:42.904980 873 log.go:172] (0xc000138790) Reply frame received for 5\nI0604 11:05:42.977487 873 log.go:172] (0xc000138790) Data frame received for 5\nI0604 11:05:42.977541 873 log.go:172] (0xc000138790) Data frame received for 3\nI0604 11:05:42.977592 873 log.go:172] (0xc0002f4000) (3) Data frame handling\nI0604 11:05:42.977617 873 log.go:172] (0xc0002f4000) (3) Data frame sent\nI0604 11:05:42.977638 873 log.go:172] (0xc000138790) Data frame received for 3\nI0604 11:05:42.977652 873 log.go:172] (0xc0002f4000) (3) Data frame handling\nI0604 11:05:42.977672 873 log.go:172] (0xc0005c7540) (5) Data frame handling\nI0604 11:05:42.979362 873 log.go:172] (0xc000138790) Data frame received for 1\nI0604 11:05:42.979395 873 log.go:172] (0xc0005c74a0) (1) Data frame handling\nI0604 11:05:42.979430 873 log.go:172] (0xc0005c74a0) (1) Data frame sent\nI0604 11:05:42.979465 873 log.go:172] (0xc000138790) (0xc0005c74a0) Stream removed, broadcasting: 1\nI0604 11:05:42.979494 873 log.go:172] (0xc000138790) Go away received\nI0604 11:05:42.979718 873 
log.go:172] (0xc000138790) (0xc0005c74a0) Stream removed, broadcasting: 1\nI0604 11:05:42.979749 873 log.go:172] (0xc000138790) (0xc0002f4000) Stream removed, broadcasting: 3\nI0604 11:05:42.979762 873 log.go:172] (0xc000138790) (0xc0005c7540) Stream removed, broadcasting: 5\n" Jun 4 11:05:42.984: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 4 11:05:42.984: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 4 11:06:13.003: INFO: Waiting for StatefulSet e2e-tests-statefulset-gcgfj/ss2 to complete update STEP: Rolling back to a previous revision Jun 4 11:06:23.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gcgfj ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 4 11:06:23.293: INFO: stderr: "I0604 11:06:23.146672 895 log.go:172] (0xc000602420) (0xc0005cf2c0) Create stream\nI0604 11:06:23.146753 895 log.go:172] (0xc000602420) (0xc0005cf2c0) Stream added, broadcasting: 1\nI0604 11:06:23.149939 895 log.go:172] (0xc000602420) Reply frame received for 1\nI0604 11:06:23.149980 895 log.go:172] (0xc000602420) (0xc000318000) Create stream\nI0604 11:06:23.150000 895 log.go:172] (0xc000602420) (0xc000318000) Stream added, broadcasting: 3\nI0604 11:06:23.151072 895 log.go:172] (0xc000602420) Reply frame received for 3\nI0604 11:06:23.151129 895 log.go:172] (0xc000602420) (0xc0006e4000) Create stream\nI0604 11:06:23.151142 895 log.go:172] (0xc000602420) (0xc0006e4000) Stream added, broadcasting: 5\nI0604 11:06:23.152068 895 log.go:172] (0xc000602420) Reply frame received for 5\nI0604 11:06:23.284030 895 log.go:172] (0xc000602420) Data frame received for 5\nI0604 11:06:23.284102 895 log.go:172] (0xc0006e4000) (5) Data frame handling\nI0604 11:06:23.284143 895 log.go:172] (0xc000602420) Data frame received for 3\nI0604 11:06:23.284169 895 log.go:172] (0xc000318000) (3) Data frame handling\nI0604 11:06:23.284185 895 log.go:172] (0xc000318000) (3) Data frame sent\nI0604 11:06:23.284333 895 log.go:172] (0xc000602420) Data frame received for 3\nI0604 11:06:23.284371 895 log.go:172] (0xc000318000) (3) Data frame handling\nI0604 11:06:23.286345 895 log.go:172] (0xc000602420) Data frame received for 1\nI0604 11:06:23.286367 895 log.go:172] (0xc0005cf2c0) (1) Data frame handling\nI0604 11:06:23.286397 895 log.go:172] (0xc0005cf2c0) (1) Data frame sent\nI0604 11:06:23.286440 895 log.go:172] (0xc000602420) (0xc0005cf2c0) Stream removed, broadcasting: 1\nI0604 11:06:23.286657 895 log.go:172] (0xc000602420) Go away received\nI0604 11:06:23.287008 895 log.go:172] (0xc000602420) (0xc0005cf2c0) Stream removed, broadcasting: 1\nI0604 11:06:23.287032 895 log.go:172] (0xc000602420) (0xc000318000) Stream removed, broadcasting: 3\nI0604 11:06:23.287044 895 log.go:172] (0xc000602420) (0xc0006e4000) Stream removed, broadcasting: 5\n" Jun 4 11:06:23.293: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 4 11:06:23.293: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 4 11:06:33.323: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jun 4 11:06:43.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gcgfj ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:06:43.577: INFO: 
stderr: "I0604 11:06:43.480333 918 log.go:172] (0xc0007304d0) (0xc000678640) Create stream\nI0604 11:06:43.480391 918 log.go:172] (0xc0007304d0) (0xc000678640) Stream added, broadcasting: 1\nI0604 11:06:43.483157 918 log.go:172] (0xc0007304d0) Reply frame received for 1\nI0604 11:06:43.483218 918 log.go:172] (0xc0007304d0) (0xc00033adc0) Create stream\nI0604 11:06:43.483243 918 log.go:172] (0xc0007304d0) (0xc00033adc0) Stream added, broadcasting: 3\nI0604 11:06:43.484333 918 log.go:172] (0xc0007304d0) Reply frame received for 3\nI0604 11:06:43.484391 918 log.go:172] (0xc0007304d0) (0xc00033af00) Create stream\nI0604 11:06:43.484405 918 log.go:172] (0xc0007304d0) (0xc00033af00) Stream added, broadcasting: 5\nI0604 11:06:43.485580 918 log.go:172] (0xc0007304d0) Reply frame received for 5\nI0604 11:06:43.571032 918 log.go:172] (0xc0007304d0) Data frame received for 5\nI0604 11:06:43.571057 918 log.go:172] (0xc00033af00) (5) Data frame handling\nI0604 11:06:43.571076 918 log.go:172] (0xc0007304d0) Data frame received for 3\nI0604 11:06:43.571081 918 log.go:172] (0xc00033adc0) (3) Data frame handling\nI0604 11:06:43.571089 918 log.go:172] (0xc00033adc0) (3) Data frame sent\nI0604 11:06:43.571094 918 log.go:172] (0xc0007304d0) Data frame received for 3\nI0604 11:06:43.571105 918 log.go:172] (0xc00033adc0) (3) Data frame handling\nI0604 11:06:43.572430 918 log.go:172] (0xc0007304d0) Data frame received for 1\nI0604 11:06:43.572455 918 log.go:172] (0xc000678640) (1) Data frame handling\nI0604 11:06:43.572472 918 log.go:172] (0xc000678640) (1) Data frame sent\nI0604 11:06:43.572490 918 log.go:172] (0xc0007304d0) (0xc000678640) Stream removed, broadcasting: 1\nI0604 11:06:43.572543 918 log.go:172] (0xc0007304d0) Go away received\nI0604 11:06:43.572723 918 log.go:172] (0xc0007304d0) (0xc000678640) Stream removed, broadcasting: 1\nI0604 11:06:43.572738 918 log.go:172] (0xc0007304d0) (0xc00033adc0) Stream removed, broadcasting: 3\nI0604 11:06:43.572745 918 log.go:172] (0xc0007304d0) (0xc00033af00) Stream removed, broadcasting: 5\n" Jun 4 11:06:43.577: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 4 11:06:43.577: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 4 11:07:13.598: INFO: Deleting all statefulset in ns e2e-tests-statefulset-gcgfj Jun 4 11:07:13.600: INFO: Scaling statefulset ss2 to 0 Jun 4 11:07:33.617: INFO: Waiting for statefulset status.replicas updated to 0 Jun 4 11:07:33.620: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:07:33.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-gcgfj" for this suite. 
Jun 4 11:07:39.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:07:39.734: INFO: namespace: e2e-tests-statefulset-gcgfj, resource: bindings, ignored listing per whitelist Jun 4 11:07:39.772: INFO: namespace e2e-tests-statefulset-gcgfj deletion completed in 6.109387197s • [SLOW TEST:147.504 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:07:39.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 4 11:07:39.877: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/ pods/ (200; 5.378779ms)
Jun 4 11:07:39.881: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.904533ms)
Jun 4 11:07:39.884: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.882301ms)
Jun 4 11:07:39.888: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.338741ms)
Jun 4 11:07:39.892: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.783316ms)
Jun 4 11:07:39.896: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.957989ms)
Jun 4 11:07:39.899: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.814119ms)
Jun 4 11:07:39.903: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.97812ms)
Jun 4 11:07:39.926: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 22.355641ms)
Jun 4 11:07:39.930: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 4.370214ms)
Jun 4 11:07:39.934: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.917082ms)
Jun 4 11:07:39.938: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.92682ms)
Jun 4 11:07:39.942: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.778675ms)
Jun 4 11:07:39.946: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.563682ms)
Jun 4 11:07:39.949: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.310765ms)
Jun 4 11:07:39.952: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.051405ms)
Jun 4 11:07:39.955: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.97973ms)
Jun 4 11:07:39.958: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.205724ms)
Jun 4 11:07:39.962: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.345497ms)
Jun 4 11:07:39.965: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 3.563177ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:07:39.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-vk9v4" for this suite. Jun 4 11:07:45.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:07:46.015: INFO: namespace: e2e-tests-proxy-vk9v4, resource: bindings, ignored listing per whitelist Jun 4 11:07:46.064: INFO: namespace e2e-tests-proxy-vk9v4 deletion completed in 6.094900509s • [SLOW TEST:6.292 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:07:46.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 4 11:07:56.251: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 4 11:07:56.296: INFO: Pod pod-with-poststart-exec-hook still exists Jun 4 11:07:58.296: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 4 11:07:58.302: INFO: Pod pod-with-poststart-exec-hook still exists Jun 4 11:08:00.297: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 4 11:08:00.301: INFO: Pod pod-with-poststart-exec-hook still exists Jun 4 11:08:02.297: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 4 11:08:02.301: INFO: Pod pod-with-poststart-exec-hook still exists Jun 4 11:08:04.297: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 4 11:08:04.302: INFO: Pod pod-with-poststart-exec-hook still exists Jun 4 11:08:06.297: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 4 11:08:06.301: INFO: Pod pod-with-poststart-exec-hook still exists Jun 4 11:08:08.297: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 4 11:08:08.301: INFO: Pod pod-with-poststart-exec-hook still exists Jun 4 11:08:10.296: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 4 11:08:10.302: INFO: Pod pod-with-poststart-exec-hook still exists Jun 4 11:08:12.296: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 4 11:08:12.301: INFO: Pod pod-with-poststart-exec-hook still exists Jun 4 11:08:14.296: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 4 11:08:14.300: INFO: Pod pod-with-poststart-exec-hook still exists Jun 4 11:08:16.297: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 4 11:08:16.301: INFO: Pod pod-with-poststart-exec-hook still exists Jun 4 11:08:18.296: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 4 11:08:18.300: INFO: Pod pod-with-poststart-exec-hook still exists Jun 4 11:08:20.297: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 4 11:08:20.308: INFO: Pod pod-with-poststart-exec-hook still exists Jun 4 11:08:22.296: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 4 11:08:22.301: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:08:22.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-8kt47" for this suite. 
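Note: the pod under test pairs its main container with a postStart exec handler. A minimal hand-written version of that shape (pod name, image and hook command are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # Runs inside the container right after it is created; the container
          # is not reported Running until the handler completes
          command: ["sh", "-c", "echo poststart > /tmp/hook-ran"]
EOF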
Jun 4 11:08:44.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:08:44.369: INFO: namespace: e2e-tests-container-lifecycle-hook-8kt47, resource: bindings, ignored listing per whitelist Jun 4 11:08:44.401: INFO: namespace e2e-tests-container-lifecycle-hook-8kt47 deletion completed in 22.094565277s • [SLOW TEST:58.337 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:08:44.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-c1665420-a653-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 4 11:08:44.567: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c167ffc6-a653-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-b9cbs" to be "success or failure" Jun 4 11:08:44.632: INFO: Pod "pod-projected-configmaps-c167ffc6-a653-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 64.913096ms Jun 4 11:08:46.643: INFO: Pod "pod-projected-configmaps-c167ffc6-a653-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076152874s Jun 4 11:08:48.648: INFO: Pod "pod-projected-configmaps-c167ffc6-a653-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080290804s STEP: Saw pod success Jun 4 11:08:48.648: INFO: Pod "pod-projected-configmaps-c167ffc6-a653-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:08:48.650: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-c167ffc6-a653-11ea-86dc-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jun 4 11:08:48.729: INFO: Waiting for pod pod-projected-configmaps-c167ffc6-a653-11ea-86dc-0242ac110018 to disappear Jun 4 11:08:48.757: INFO: Pod pod-projected-configmaps-c167ffc6-a653-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:08:48.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-b9cbs" for this suite. 
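Note: as with the projected secret case earlier, the mapping here is the items/path remap, this time on a projected configMap source. A minimal illustrative pod (configMap name, key and paths are made up, not the generated ones above):

kubectl create configmap demo-config --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap/path/to/data-1"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected-configmap
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: demo-config
          items:
          - key: data-1
            path: path/to/data-1   # nested relative paths are allowed
EOF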
Jun 4 11:08:54.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:08:54.870: INFO: namespace: e2e-tests-projected-b9cbs, resource: bindings, ignored listing per whitelist Jun 4 11:08:54.890: INFO: namespace e2e-tests-projected-b9cbs deletion completed in 6.129464538s • [SLOW TEST:10.490 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:08:54.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xdkxc STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 4 11:08:55.005: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 4 11:09:25.253: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.75 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-xdkxc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 11:09:25.253: INFO: >>> kubeConfig: /root/.kube/config I0604 11:09:25.281853 6 log.go:172] (0xc0006e3d90) (0xc001bfe640) Create stream I0604 11:09:25.281879 6 log.go:172] (0xc0006e3d90) (0xc001bfe640) Stream added, broadcasting: 1 I0604 11:09:25.284004 6 log.go:172] (0xc0006e3d90) Reply frame received for 1 I0604 11:09:25.284041 6 log.go:172] (0xc0006e3d90) (0xc001ba8000) Create stream I0604 11:09:25.284050 6 log.go:172] (0xc0006e3d90) (0xc001ba8000) Stream added, broadcasting: 3 I0604 11:09:25.285252 6 log.go:172] (0xc0006e3d90) Reply frame received for 3 I0604 11:09:25.285274 6 log.go:172] (0xc0006e3d90) (0xc0018570e0) Create stream I0604 11:09:25.285289 6 log.go:172] (0xc0006e3d90) (0xc0018570e0) Stream added, broadcasting: 5 I0604 11:09:25.286332 6 log.go:172] (0xc0006e3d90) Reply frame received for 5 I0604 11:09:26.373740 6 log.go:172] (0xc0006e3d90) Data frame received for 3 I0604 11:09:26.373797 6 log.go:172] (0xc001ba8000) (3) Data frame handling I0604 11:09:26.373843 6 log.go:172] (0xc001ba8000) (3) Data frame sent I0604 11:09:26.374209 6 log.go:172] (0xc0006e3d90) Data frame received for 3 I0604 11:09:26.374291 6 log.go:172] (0xc001ba8000) (3) Data frame handling I0604 11:09:26.374336 6 log.go:172] (0xc0006e3d90) Data frame received for 5 I0604 11:09:26.374360 6 log.go:172] (0xc0018570e0) (5) Data frame handling I0604 11:09:26.376562 6 log.go:172] (0xc0006e3d90) Data frame received for 1 I0604 11:09:26.376639 6 
log.go:172] (0xc001bfe640) (1) Data frame handling I0604 11:09:26.376710 6 log.go:172] (0xc001bfe640) (1) Data frame sent I0604 11:09:26.376780 6 log.go:172] (0xc0006e3d90) (0xc001bfe640) Stream removed, broadcasting: 1 I0604 11:09:26.376852 6 log.go:172] (0xc0006e3d90) Go away received I0604 11:09:26.377421 6 log.go:172] (0xc0006e3d90) (0xc001bfe640) Stream removed, broadcasting: 1 I0604 11:09:26.377463 6 log.go:172] (0xc0006e3d90) (0xc001ba8000) Stream removed, broadcasting: 3 I0604 11:09:26.377492 6 log.go:172] (0xc0006e3d90) (0xc0018570e0) Stream removed, broadcasting: 5 Jun 4 11:09:26.377: INFO: Found all expected endpoints: [netserver-0] Jun 4 11:09:26.381: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.23 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-xdkxc PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 11:09:26.381: INFO: >>> kubeConfig: /root/.kube/config I0604 11:09:26.411976 6 log.go:172] (0xc0015342c0) (0xc001ba8320) Create stream I0604 11:09:26.412011 6 log.go:172] (0xc0015342c0) (0xc001ba8320) Stream added, broadcasting: 1 I0604 11:09:26.414672 6 log.go:172] (0xc0015342c0) Reply frame received for 1 I0604 11:09:26.414712 6 log.go:172] (0xc0015342c0) (0xc001b4a960) Create stream I0604 11:09:26.414729 6 log.go:172] (0xc0015342c0) (0xc001b4a960) Stream added, broadcasting: 3 I0604 11:09:26.415737 6 log.go:172] (0xc0015342c0) Reply frame received for 3 I0604 11:09:26.415771 6 log.go:172] (0xc0015342c0) (0xc001bfe6e0) Create stream I0604 11:09:26.415782 6 log.go:172] (0xc0015342c0) (0xc001bfe6e0) Stream added, broadcasting: 5 I0604 11:09:26.416726 6 log.go:172] (0xc0015342c0) Reply frame received for 5 I0604 11:09:27.498406 6 log.go:172] (0xc0015342c0) Data frame received for 3 I0604 11:09:27.498444 6 log.go:172] (0xc001b4a960) (3) Data frame handling I0604 11:09:27.498500 6 log.go:172] (0xc001b4a960) (3) Data frame sent I0604 11:09:27.498752 6 log.go:172] (0xc0015342c0) Data frame received for 5 I0604 11:09:27.498786 6 log.go:172] (0xc001bfe6e0) (5) Data frame handling I0604 11:09:27.498825 6 log.go:172] (0xc0015342c0) Data frame received for 3 I0604 11:09:27.498852 6 log.go:172] (0xc001b4a960) (3) Data frame handling I0604 11:09:27.500572 6 log.go:172] (0xc0015342c0) Data frame received for 1 I0604 11:09:27.500609 6 log.go:172] (0xc001ba8320) (1) Data frame handling I0604 11:09:27.500641 6 log.go:172] (0xc001ba8320) (1) Data frame sent I0604 11:09:27.500662 6 log.go:172] (0xc0015342c0) (0xc001ba8320) Stream removed, broadcasting: 1 I0604 11:09:27.500684 6 log.go:172] (0xc0015342c0) Go away received I0604 11:09:27.500969 6 log.go:172] (0xc0015342c0) (0xc001ba8320) Stream removed, broadcasting: 1 I0604 11:09:27.501003 6 log.go:172] (0xc0015342c0) (0xc001b4a960) Stream removed, broadcasting: 3 I0604 11:09:27.501028 6 log.go:172] (0xc0015342c0) (0xc001bfe6e0) Stream removed, broadcasting: 5 Jun 4 11:09:27.501: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:09:27.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-xdkxc" for this suite. 
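Note: the check itself is just the 'hostName' command sent over UDP to the netserver pod's port 8081, as in the ExecWithOptions lines above. Run by hand from within the test namespace it looks roughly like this (pod names match the test's, but the target IP has to come from the live cluster):

# Resolve the target pod IP, then probe it from the host-network test pod
TARGET_IP=$(kubectl get pod netserver-0 -o jsonpath='{.status.podIP}')
kubectl exec host-test-container-pod -- \
  /bin/sh -c "echo 'hostName' | nc -w 1 -u ${TARGET_IP} 8081 | grep -v '^\s*\$'"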
Jun 4 11:09:51.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:09:51.556: INFO: namespace: e2e-tests-pod-network-test-xdkxc, resource: bindings, ignored listing per whitelist Jun 4 11:09:51.604: INFO: namespace e2e-tests-pod-network-test-xdkxc deletion completed in 24.098616904s • [SLOW TEST:56.714 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:09:51.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 4 11:09:51.783: INFO: Waiting up to 5m0s for pod "pod-e96d2fad-a653-11ea-86dc-0242ac110018" in namespace "e2e-tests-emptydir-gdzmf" to be "success or failure" Jun 4 11:09:51.788: INFO: Pod "pod-e96d2fad-a653-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.198428ms Jun 4 11:09:53.824: INFO: Pod "pod-e96d2fad-a653-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041269938s Jun 4 11:09:55.828: INFO: Pod "pod-e96d2fad-a653-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045327294s STEP: Saw pod success Jun 4 11:09:55.828: INFO: Pod "pod-e96d2fad-a653-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:09:55.831: INFO: Trying to get logs from node hunter-worker pod pod-e96d2fad-a653-11ea-86dc-0242ac110018 container test-container: STEP: delete the pod Jun 4 11:09:55.870: INFO: Waiting for pod pod-e96d2fad-a653-11ea-86dc-0242ac110018 to disappear Jun 4 11:09:55.878: INFO: Pod pod-e96d2fad-a653-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:09:55.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-gdzmf" for this suite. 
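Note: the (root,0666,tmpfs) case boils down to an emptyDir with medium: Memory mounted into a container that, running as root, writes a file and sets mode 0666 on it. A minimal hand-written pod along those lines (names and the probe command are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /test-volume/data && chmod 0666 /test-volume/data && ls -ln /test-volume && mount | grep test-volume"]
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory   # backs the volume with tmpfs instead of node disk
EOF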
Jun 4 11:10:01.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:10:01.964: INFO: namespace: e2e-tests-emptydir-gdzmf, resource: bindings, ignored listing per whitelist Jun 4 11:10:01.994: INFO: namespace e2e-tests-emptydir-gdzmf deletion completed in 6.112517143s • [SLOW TEST:10.389 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:10:01.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Jun 4 11:10:02.105: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Jun 4 11:10:02.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-25pb5' Jun 4 11:10:02.477: INFO: stderr: "" Jun 4 11:10:02.477: INFO: stdout: "service/redis-slave created\n" Jun 4 11:10:02.478: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Jun 4 11:10:02.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-25pb5' Jun 4 11:10:02.787: INFO: stderr: "" Jun 4 11:10:02.787: INFO: stdout: "service/redis-master created\n" Jun 4 11:10:02.787: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jun 4 11:10:02.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-25pb5' Jun 4 11:10:03.122: INFO: stderr: "" Jun 4 11:10:03.122: INFO: stdout: "service/frontend created\n" Jun 4 11:10:03.122: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Jun 4 11:10:03.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-25pb5' Jun 4 11:10:03.361: INFO: stderr: "" Jun 4 11:10:03.361: INFO: stdout: "deployment.extensions/frontend created\n" Jun 4 11:10:03.361: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jun 4 11:10:03.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-25pb5' Jun 4 11:10:03.635: INFO: stderr: "" Jun 4 11:10:03.635: INFO: stdout: "deployment.extensions/redis-master created\n" Jun 4 11:10:03.635: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Jun 4 11:10:03.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-25pb5' Jun 4 11:10:03.934: INFO: stderr: "" Jun 4 11:10:03.934: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Jun 4 11:10:03.934: INFO: Waiting for all frontend pods to be Running. Jun 4 11:10:13.985: INFO: Waiting for frontend to serve content. Jun 4 11:10:14.020: INFO: Trying to add a new entry to the guestbook. Jun 4 11:10:14.089: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jun 4 11:10:14.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-25pb5' Jun 4 11:10:14.277: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 4 11:10:14.277: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jun 4 11:10:14.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-25pb5' Jun 4 11:10:14.419: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 4 11:10:14.419: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jun 4 11:10:14.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-25pb5' Jun 4 11:10:14.575: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 4 11:10:14.575: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 4 11:10:14.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-25pb5' Jun 4 11:10:14.689: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 4 11:10:14.689: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 4 11:10:14.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-25pb5' Jun 4 11:10:14.784: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 4 11:10:14.784: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jun 4 11:10:14.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-25pb5' Jun 4 11:10:15.186: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 4 11:10:15.186: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:10:15.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-25pb5" for this suite. 
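Note: the cleanup above leans on forced, zero-grace-period deletion, which removes the API objects immediately instead of waiting for graceful termination; that is what triggers the repeated warning in the output. Addressed by resource name rather than from a manifest stream, the equivalent is:

# Immediate deletion: the warning about resources possibly continuing to run is expected
kubectl delete service frontend redis-master redis-slave --grace-period=0 --force
kubectl delete deployment frontend redis-master redis-slave --grace-period=0 --force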
Jun 4 11:10:53.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:10:53.405: INFO: namespace: e2e-tests-kubectl-25pb5, resource: bindings, ignored listing per whitelist Jun 4 11:10:53.466: INFO: namespace e2e-tests-kubectl-25pb5 deletion completed in 38.223769242s • [SLOW TEST:51.472 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:10:53.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 4 11:11:01.663: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 4 11:11:01.671: INFO: Pod pod-with-prestop-http-hook still exists Jun 4 11:11:03.672: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 4 11:11:03.700: INFO: Pod pod-with-prestop-http-hook still exists Jun 4 11:11:05.672: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 4 11:11:05.677: INFO: Pod pod-with-prestop-http-hook still exists Jun 4 11:11:07.672: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 4 11:11:07.676: INFO: Pod pod-with-prestop-http-hook still exists Jun 4 11:11:09.672: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 4 11:11:09.676: INFO: Pod pod-with-prestop-http-hook still exists Jun 4 11:11:11.672: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 4 11:11:11.676: INFO: Pod pod-with-prestop-http-hook still exists Jun 4 11:11:13.672: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 4 11:11:13.676: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:11:13.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-f6zxn" for this suite. 
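The lifecycle-hook test above first starts a handler pod ("create the container to handle the HTTPGet hook request") and then a pod whose preStop hook calls it over HTTP before deletion completes. The pod spec itself is not echoed in the log; a minimal sketch of a preStop httpGet hook follows, where the pod name matches the log but the image, handler path, and port are assumed illustrations:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1       # assumed image; the e2e suite uses its own test images
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop     # assumed handler path
          port: 8080
          # host defaults to the pod's own IP; the e2e test points it at the separate handler pod
EOF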
Jun 4 11:11:35.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:11:35.777: INFO: namespace: e2e-tests-container-lifecycle-hook-f6zxn, resource: bindings, ignored listing per whitelist Jun 4 11:11:35.785: INFO: namespace e2e-tests-container-lifecycle-hook-f6zxn deletion completed in 22.098268253s • [SLOW TEST:42.319 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:11:35.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 4 11:11:35.886: INFO: Waiting up to 5m0s for pod "downward-api-27852e38-a654-11ea-86dc-0242ac110018" in namespace "e2e-tests-downward-api-8bl9k" to be "success or failure" Jun 4 11:11:35.890: INFO: Pod "downward-api-27852e38-a654-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041206ms Jun 4 11:11:37.966: INFO: Pod "downward-api-27852e38-a654-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07971088s Jun 4 11:11:39.970: INFO: Pod "downward-api-27852e38-a654-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083753714s STEP: Saw pod success Jun 4 11:11:39.970: INFO: Pod "downward-api-27852e38-a654-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:11:39.972: INFO: Trying to get logs from node hunter-worker pod downward-api-27852e38-a654-11ea-86dc-0242ac110018 container dapi-container: STEP: delete the pod Jun 4 11:11:40.020: INFO: Waiting for pod downward-api-27852e38-a654-11ea-86dc-0242ac110018 to disappear Jun 4 11:11:40.048: INFO: Pod downward-api-27852e38-a654-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:11:40.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8bl9k" for this suite. 
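The Downward API test above injects the container's own CPU/memory requests and limits as environment variables through resourceFieldRef. A minimal sketch of that wiring, with illustrative names, image, and resource values rather than the test's exact spec:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests: {cpu: 250m, memory: 32Mi}
      limits:   {cpu: 500m, memory: 64Mi}
    env:
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF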
Jun 4 11:11:46.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:11:46.182: INFO: namespace: e2e-tests-downward-api-8bl9k, resource: bindings, ignored listing per whitelist Jun 4 11:11:46.199: INFO: namespace e2e-tests-downward-api-8bl9k deletion completed in 6.148559311s • [SLOW TEST:10.413 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:11:46.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-2dc4ffa4-a654-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume secrets Jun 4 11:11:46.483: INFO: Waiting up to 5m0s for pod "pod-secrets-2dd75975-a654-11ea-86dc-0242ac110018" in namespace "e2e-tests-secrets-596tg" to be "success or failure" Jun 4 11:11:46.505: INFO: Pod "pod-secrets-2dd75975-a654-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 21.935938ms Jun 4 11:11:48.509: INFO: Pod "pod-secrets-2dd75975-a654-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02559346s Jun 4 11:11:50.513: INFO: Pod "pod-secrets-2dd75975-a654-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030144558s STEP: Saw pod success Jun 4 11:11:50.513: INFO: Pod "pod-secrets-2dd75975-a654-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:11:50.516: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-2dd75975-a654-11ea-86dc-0242ac110018 container secret-volume-test: STEP: delete the pod Jun 4 11:11:50.587: INFO: Waiting for pod pod-secrets-2dd75975-a654-11ea-86dc-0242ac110018 to disappear Jun 4 11:11:50.589: INFO: Pod pod-secrets-2dd75975-a654-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:11:50.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-596tg" for this suite. 
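The Secrets test above mounts a secret as a volume and verifies that a secret with the same name in a different namespace does not interfere. A minimal secret-volume mount looks like the sketch below; the secret name, key, and image are illustrative:

kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
EOF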
Jun 4 11:11:56.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:11:56.680: INFO: namespace: e2e-tests-secrets-596tg, resource: bindings, ignored listing per whitelist Jun 4 11:11:56.706: INFO: namespace e2e-tests-secrets-596tg deletion completed in 6.114334752s STEP: Destroying namespace "e2e-tests-secret-namespace-sqw8m" for this suite. Jun 4 11:12:02.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:12:02.802: INFO: namespace: e2e-tests-secret-namespace-sqw8m, resource: bindings, ignored listing per whitelist Jun 4 11:12:02.819: INFO: namespace e2e-tests-secret-namespace-sqw8m deletion completed in 6.112829635s • [SLOW TEST:16.620 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:12:02.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-37a634e8-a654-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 4 11:12:02.971: INFO: Waiting up to 5m0s for pod "pod-configmaps-37ab1ce8-a654-11ea-86dc-0242ac110018" in namespace "e2e-tests-configmap-nzxv6" to be "success or failure" Jun 4 11:12:02.987: INFO: Pod "pod-configmaps-37ab1ce8-a654-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.760558ms Jun 4 11:12:04.991: INFO: Pod "pod-configmaps-37ab1ce8-a654-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019920541s Jun 4 11:12:06.995: INFO: Pod "pod-configmaps-37ab1ce8-a654-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023950663s STEP: Saw pod success Jun 4 11:12:06.995: INFO: Pod "pod-configmaps-37ab1ce8-a654-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:12:06.997: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-37ab1ce8-a654-11ea-86dc-0242ac110018 container configmap-volume-test: STEP: delete the pod Jun 4 11:12:07.039: INFO: Waiting for pod pod-configmaps-37ab1ce8-a654-11ea-86dc-0242ac110018 to disappear Jun 4 11:12:07.096: INFO: Pod pod-configmaps-37ab1ce8-a654-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:12:07.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-nzxv6" for this suite. Jun 4 11:12:13.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:12:13.167: INFO: namespace: e2e-tests-configmap-nzxv6, resource: bindings, ignored listing per whitelist Jun 4 11:12:13.210: INFO: namespace e2e-tests-configmap-nzxv6 deletion completed in 6.109384944s • [SLOW TEST:10.392 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:12:13.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 4 11:12:13.352: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3dd9eaea-a654-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-2qzvb" to be "success or failure" Jun 4 11:12:13.384: INFO: Pod "downwardapi-volume-3dd9eaea-a654-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 32.060169ms Jun 4 11:12:15.387: INFO: Pod "downwardapi-volume-3dd9eaea-a654-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035245671s Jun 4 11:12:17.391: INFO: Pod "downwardapi-volume-3dd9eaea-a654-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038690968s STEP: Saw pod success Jun 4 11:12:17.391: INFO: Pod "downwardapi-volume-3dd9eaea-a654-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:12:17.394: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-3dd9eaea-a654-11ea-86dc-0242ac110018 container client-container: STEP: delete the pod Jun 4 11:12:17.559: INFO: Waiting for pod downwardapi-volume-3dd9eaea-a654-11ea-86dc-0242ac110018 to disappear Jun 4 11:12:17.570: INFO: Pod downwardapi-volume-3dd9eaea-a654-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:12:17.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2qzvb" for this suite. Jun 4 11:12:23.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:12:23.629: INFO: namespace: e2e-tests-projected-2qzvb, resource: bindings, ignored listing per whitelist Jun 4 11:12:23.675: INFO: namespace e2e-tests-projected-2qzvb deletion completed in 6.101928299s • [SLOW TEST:10.464 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:12:23.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Jun 4 11:12:23.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jun 4 11:12:23.905: INFO: stderr: "" Jun 4 11:12:23.905: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:12:23.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-5h498" for this suite. 
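The earlier ConfigMap test ("consumable from pods in volume with mappings and Item mode set") projects selected keys to custom paths with an explicit per-file mode. The items/mode stanza it exercises looks roughly like this; the configMap name, key, path, and mode are illustrative:

kubectl create configmap configmap-test-volume-map --from-literal=data-2=value-2
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -lR /etc/configmap-volume && cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-2
        path: path/to/data-2
        mode: 0400    # per-item mode overrides defaultMode for this one file
EOF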
Jun 4 11:12:29.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:12:30.005: INFO: namespace: e2e-tests-kubectl-5h498, resource: bindings, ignored listing per whitelist Jun 4 11:12:30.046: INFO: namespace e2e-tests-kubectl-5h498 deletion completed in 6.13680106s • [SLOW TEST:6.370 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:12:30.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-47e183d3-a654-11ea-86dc-0242ac110018 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-47e183d3-a654-11ea-86dc-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:12:36.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-95t2v" for this suite. 
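The "updates should be reflected in volume" test above relies on the kubelet periodically resyncing configMap volumes into running pods. The same behaviour can be observed by hand with something like the following sketch; names are illustrative and propagation can take up to a kubelet sync period:

kubectl create configmap configmap-test-upd --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-update-watch
spec:
  containers:
  - name: watch
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: configmap-test-upd
EOF
kubectl create configmap configmap-test-upd --from-literal=data-1=value-2 \
  --dry-run -o yaml | kubectl replace -f -
kubectl logs -f cm-update-watch   # eventually prints value-2 without a pod restart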
Jun 4 11:12:58.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:12:58.322: INFO: namespace: e2e-tests-configmap-95t2v, resource: bindings, ignored listing per whitelist Jun 4 11:12:58.350: INFO: namespace e2e-tests-configmap-95t2v deletion completed in 22.102454685s • [SLOW TEST:28.304 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:12:58.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-5f4v STEP: Creating a pod to test atomic-volume-subpath Jun 4 11:12:58.476: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5f4v" in namespace "e2e-tests-subpath-wdtlh" to be "success or failure" Jun 4 11:12:58.479: INFO: Pod "pod-subpath-test-configmap-5f4v": Phase="Pending", Reason="", readiness=false. Elapsed: 3.206682ms Jun 4 11:13:00.515: INFO: Pod "pod-subpath-test-configmap-5f4v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039822064s Jun 4 11:13:02.541: INFO: Pod "pod-subpath-test-configmap-5f4v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065462755s Jun 4 11:13:04.545: INFO: Pod "pod-subpath-test-configmap-5f4v": Phase="Running", Reason="", readiness=true. Elapsed: 6.069795549s Jun 4 11:13:06.550: INFO: Pod "pod-subpath-test-configmap-5f4v": Phase="Running", Reason="", readiness=false. Elapsed: 8.074051815s Jun 4 11:13:08.554: INFO: Pod "pod-subpath-test-configmap-5f4v": Phase="Running", Reason="", readiness=false. Elapsed: 10.078525354s Jun 4 11:13:10.558: INFO: Pod "pod-subpath-test-configmap-5f4v": Phase="Running", Reason="", readiness=false. Elapsed: 12.082748116s Jun 4 11:13:12.563: INFO: Pod "pod-subpath-test-configmap-5f4v": Phase="Running", Reason="", readiness=false. Elapsed: 14.086925265s Jun 4 11:13:14.567: INFO: Pod "pod-subpath-test-configmap-5f4v": Phase="Running", Reason="", readiness=false. Elapsed: 16.091073664s Jun 4 11:13:16.583: INFO: Pod "pod-subpath-test-configmap-5f4v": Phase="Running", Reason="", readiness=false. Elapsed: 18.107225362s Jun 4 11:13:18.587: INFO: Pod "pod-subpath-test-configmap-5f4v": Phase="Running", Reason="", readiness=false. Elapsed: 20.111837923s Jun 4 11:13:20.592: INFO: Pod "pod-subpath-test-configmap-5f4v": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.116321535s Jun 4 11:13:22.596: INFO: Pod "pod-subpath-test-configmap-5f4v": Phase="Running", Reason="", readiness=false. Elapsed: 24.120601279s Jun 4 11:13:24.649: INFO: Pod "pod-subpath-test-configmap-5f4v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.173327439s STEP: Saw pod success Jun 4 11:13:24.649: INFO: Pod "pod-subpath-test-configmap-5f4v" satisfied condition "success or failure" Jun 4 11:13:24.652: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-5f4v container test-container-subpath-configmap-5f4v: STEP: delete the pod Jun 4 11:13:24.707: INFO: Waiting for pod pod-subpath-test-configmap-5f4v to disappear Jun 4 11:13:24.711: INFO: Pod pod-subpath-test-configmap-5f4v no longer exists STEP: Deleting pod pod-subpath-test-configmap-5f4v Jun 4 11:13:24.711: INFO: Deleting pod "pod-subpath-test-configmap-5f4v" in namespace "e2e-tests-subpath-wdtlh" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:13:24.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-wdtlh" for this suite. Jun 4 11:13:30.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:13:30.829: INFO: namespace: e2e-tests-subpath-wdtlh, resource: bindings, ignored listing per whitelist Jun 4 11:13:30.841: INFO: namespace e2e-tests-subpath-wdtlh deletion completed in 6.124780263s • [SLOW TEST:32.491 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:13:30.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:13:34.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-hq6sh" for this suite. 
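The Subpath test above ("mountPath of existing file") mounts a single configMap key directly at a file path inside the container using volumeMounts.subPath. The relevant stanza looks roughly like this sketch, with illustrative names, key, path, and image:

kubectl create configmap subpath-cm --from-literal=motd=hello
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox:1.29
    command: ["cat", "/etc/motd"]
    volumeMounts:
    - name: cm
      mountPath: /etc/motd   # mountPath is a single file; with subPath only the named key is mounted there
      subPath: motd
  volumes:
  - name: cm
    configMap:
      name: subpath-cm
EOF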
Jun 4 11:14:24.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:14:25.029: INFO: namespace: e2e-tests-kubelet-test-hq6sh, resource: bindings, ignored listing per whitelist Jun 4 11:14:25.089: INFO: namespace e2e-tests-kubelet-test-hq6sh deletion completed in 50.11979254s • [SLOW TEST:54.247 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:14:25.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-8c737df7-a654-11ea-86dc-0242ac110018 STEP: Creating configMap with name cm-test-opt-upd-8c737ea0-a654-11ea-86dc-0242ac110018 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-8c737df7-a654-11ea-86dc-0242ac110018 STEP: Updating configmap cm-test-opt-upd-8c737ea0-a654-11ea-86dc-0242ac110018 STEP: Creating configMap with name cm-test-opt-create-8c737eeb-a654-11ea-86dc-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:14:33.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-fh9hq" for this suite. 
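The "optional updates" test above creates, deletes, and updates configMaps that the pod references with optional: true, so the pod stays healthy even while a referenced configMap does not exist yet. The stanza in question looks like this sketch; names and image are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-optional-cm
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: maybe-cm
      mountPath: /etc/maybe-cm
  volumes:
  - name: maybe-cm
    configMap:
      name: cm-test-opt-create   # may not exist yet; the mount stays empty until it is created
      optional: true
EOF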
Jun 4 11:14:57.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:14:57.414: INFO: namespace: e2e-tests-configmap-fh9hq, resource: bindings, ignored listing per whitelist Jun 4 11:14:57.466: INFO: namespace e2e-tests-configmap-fh9hq deletion completed in 24.123059138s • [SLOW TEST:32.377 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:14:57.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 4 11:14:57.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Jun 4 11:14:57.619: INFO: stderr: "" Jun 4 11:14:57.619: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Jun 4 11:14:57.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ckc2x' Jun 4 11:15:00.067: INFO: stderr: "" Jun 4 11:15:00.067: INFO: stdout: "replicationcontroller/redis-master created\n" Jun 4 11:15:00.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ckc2x' Jun 4 11:15:00.397: INFO: stderr: "" Jun 4 11:15:00.397: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jun 4 11:15:01.402: INFO: Selector matched 1 pods for map[app:redis] Jun 4 11:15:01.402: INFO: Found 0 / 1 Jun 4 11:15:02.401: INFO: Selector matched 1 pods for map[app:redis] Jun 4 11:15:02.401: INFO: Found 0 / 1 Jun 4 11:15:03.402: INFO: Selector matched 1 pods for map[app:redis] Jun 4 11:15:03.402: INFO: Found 1 / 1 Jun 4 11:15:03.402: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 4 11:15:03.406: INFO: Selector matched 1 pods for map[app:redis] Jun 4 11:15:03.407: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jun 4 11:15:03.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-fhpz6 --namespace=e2e-tests-kubectl-ckc2x' Jun 4 11:15:03.533: INFO: stderr: "" Jun 4 11:15:03.533: INFO: stdout: "Name: redis-master-fhpz6\nNamespace: e2e-tests-kubectl-ckc2x\nPriority: 0\nPriorityClassName: \nNode: hunter-worker/172.17.0.3\nStart Time: Thu, 04 Jun 2020 11:15:00 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.33\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://4931aecbdbdcc4624cd25874e7ab5f7e7776fda1215ece0f14ec2e362c677309\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 04 Jun 2020 11:15:02 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-dlb5p (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-dlb5p:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-dlb5p\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned e2e-tests-kubectl-ckc2x/redis-master-fhpz6 to hunter-worker\n Normal Pulled 2s kubelet, hunter-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, hunter-worker Created container\n Normal Started 1s kubelet, hunter-worker Started container\n" Jun 4 11:15:03.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-ckc2x' Jun 4 11:15:03.674: INFO: stderr: "" Jun 4 11:15:03.675: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-ckc2x\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: redis-master-fhpz6\n" Jun 4 11:15:03.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-ckc2x' Jun 4 11:15:03.792: INFO: stderr: "" Jun 4 11:15:03.792: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-ckc2x\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.97.1.43\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.33:6379\nSession Affinity: None\nEvents: \n" Jun 4 11:15:03.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' Jun 4 11:15:03.928: INFO: stderr: "" Jun 4 11:15:03.928: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 04 Jun 2020 11:15:01 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 04 Jun 2020 11:15:01 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 04 Jun 2020 11:15:01 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 04 Jun 2020 11:15:01 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 80d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 80d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 80d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 80d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 80d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 80d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 80d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jun 4 11:15:03.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-ckc2x' Jun 4 11:15:04.044: INFO: stderr: "" Jun 4 11:15:04.044: INFO: stdout: "Name: e2e-tests-kubectl-ckc2x\nLabels: e2e-framework=kubectl\n e2e-run=b3f299bb-a650-11ea-86dc-0242ac110018\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:15:04.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ckc2x" for this suite. 
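Collected in one place, the describe subcommands this test drives (namespace and object names taken from the log, with the suite's kubeconfig flag kept as logged) are:

kubectl --kubeconfig=/root/.kube/config describe pod redis-master-fhpz6 --namespace=e2e-tests-kubectl-ckc2x
kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-ckc2x
kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-ckc2x
kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane
kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-ckc2x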
Jun 4 11:15:26.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:15:26.123: INFO: namespace: e2e-tests-kubectl-ckc2x, resource: bindings, ignored listing per whitelist Jun 4 11:15:26.138: INFO: namespace e2e-tests-kubectl-ckc2x deletion completed in 22.090570444s • [SLOW TEST:28.672 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:15:26.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Jun 4 11:15:26.258: INFO: Waiting up to 5m0s for pod "var-expansion-b0d5400b-a654-11ea-86dc-0242ac110018" in namespace "e2e-tests-var-expansion-6blgc" to be "success or failure" Jun 4 11:15:26.262: INFO: Pod "var-expansion-b0d5400b-a654-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.920375ms Jun 4 11:15:28.657: INFO: Pod "var-expansion-b0d5400b-a654-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.399335186s Jun 4 11:15:30.661: INFO: Pod "var-expansion-b0d5400b-a654-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.403418757s STEP: Saw pod success Jun 4 11:15:30.661: INFO: Pod "var-expansion-b0d5400b-a654-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:15:30.664: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-b0d5400b-a654-11ea-86dc-0242ac110018 container dapi-container: STEP: delete the pod Jun 4 11:15:30.694: INFO: Waiting for pod var-expansion-b0d5400b-a654-11ea-86dc-0242ac110018 to disappear Jun 4 11:15:30.896: INFO: Pod var-expansion-b0d5400b-a654-11ea-86dc-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:15:30.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-6blgc" for this suite. 
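Variable expansion substitutes $(VAR) references in a container's command and args using that container's declared environment, before any shell runs. A minimal sketch of the pattern the test above exercises, with illustrative names and values:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c"]
    args: ["echo test-value is $(TEST_VAR)"]   # $(TEST_VAR) is expanded by the kubelet, not the shell
    env:
    - name: TEST_VAR
      value: test-value
EOF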
Jun 4 11:15:37.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:15:37.123: INFO: namespace: e2e-tests-var-expansion-6blgc, resource: bindings, ignored listing per whitelist Jun 4 11:15:37.150: INFO: namespace e2e-tests-var-expansion-6blgc deletion completed in 6.250818417s • [SLOW TEST:11.012 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:15:37.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jun 4 11:15:37.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:37.524: INFO: stderr: "" Jun 4 11:15:37.524: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 4 11:15:37.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:37.651: INFO: stderr: "" Jun 4 11:15:37.651: INFO: stdout: "update-demo-nautilus-nhvvx update-demo-nautilus-prdv4 " Jun 4 11:15:37.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nhvvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:37.764: INFO: stderr: "" Jun 4 11:15:37.764: INFO: stdout: "" Jun 4 11:15:37.764: INFO: update-demo-nautilus-nhvvx is created but not running Jun 4 11:15:42.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:42.870: INFO: stderr: "" Jun 4 11:15:42.870: INFO: stdout: "update-demo-nautilus-nhvvx update-demo-nautilus-prdv4 " Jun 4 11:15:42.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nhvvx -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:42.977: INFO: stderr: "" Jun 4 11:15:42.978: INFO: stdout: "true" Jun 4 11:15:42.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nhvvx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:43.092: INFO: stderr: "" Jun 4 11:15:43.092: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 4 11:15:43.092: INFO: validating pod update-demo-nautilus-nhvvx Jun 4 11:15:43.097: INFO: got data: { "image": "nautilus.jpg" } Jun 4 11:15:43.097: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 4 11:15:43.097: INFO: update-demo-nautilus-nhvvx is verified up and running Jun 4 11:15:43.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-prdv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:43.203: INFO: stderr: "" Jun 4 11:15:43.203: INFO: stdout: "true" Jun 4 11:15:43.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-prdv4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:43.306: INFO: stderr: "" Jun 4 11:15:43.306: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 4 11:15:43.306: INFO: validating pod update-demo-nautilus-prdv4 Jun 4 11:15:43.310: INFO: got data: { "image": "nautilus.jpg" } Jun 4 11:15:43.310: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 4 11:15:43.310: INFO: update-demo-nautilus-prdv4 is verified up and running STEP: scaling down the replication controller Jun 4 11:15:43.312: INFO: scanned /root for discovery docs: Jun 4 11:15:43.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:44.467: INFO: stderr: "" Jun 4 11:15:44.467: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 4 11:15:44.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:44.564: INFO: stderr: "" Jun 4 11:15:44.564: INFO: stdout: "update-demo-nautilus-nhvvx update-demo-nautilus-prdv4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 4 11:15:49.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:49.663: INFO: stderr: "" Jun 4 11:15:49.663: INFO: stdout: "update-demo-nautilus-nhvvx " Jun 4 11:15:49.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nhvvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:49.759: INFO: stderr: "" Jun 4 11:15:49.759: INFO: stdout: "true" Jun 4 11:15:49.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nhvvx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:49.850: INFO: stderr: "" Jun 4 11:15:49.850: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 4 11:15:49.850: INFO: validating pod update-demo-nautilus-nhvvx Jun 4 11:15:49.854: INFO: got data: { "image": "nautilus.jpg" } Jun 4 11:15:49.854: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 4 11:15:49.854: INFO: update-demo-nautilus-nhvvx is verified up and running STEP: scaling up the replication controller Jun 4 11:15:49.855: INFO: scanned /root for discovery docs: Jun 4 11:15:49.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:50.993: INFO: stderr: "" Jun 4 11:15:50.993: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 4 11:15:50.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:51.098: INFO: stderr: "" Jun 4 11:15:51.098: INFO: stdout: "update-demo-nautilus-nhvvx update-demo-nautilus-sqlgl " Jun 4 11:15:51.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nhvvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:51.193: INFO: stderr: "" Jun 4 11:15:51.193: INFO: stdout: "true" Jun 4 11:15:51.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nhvvx -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:51.285: INFO: stderr: "" Jun 4 11:15:51.285: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 4 11:15:51.285: INFO: validating pod update-demo-nautilus-nhvvx Jun 4 11:15:51.288: INFO: got data: { "image": "nautilus.jpg" } Jun 4 11:15:51.288: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 4 11:15:51.288: INFO: update-demo-nautilus-nhvvx is verified up and running Jun 4 11:15:51.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sqlgl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:51.385: INFO: stderr: "" Jun 4 11:15:51.385: INFO: stdout: "" Jun 4 11:15:51.385: INFO: update-demo-nautilus-sqlgl is created but not running Jun 4 11:15:56.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:56.486: INFO: stderr: "" Jun 4 11:15:56.486: INFO: stdout: "update-demo-nautilus-nhvvx update-demo-nautilus-sqlgl " Jun 4 11:15:56.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nhvvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:56.616: INFO: stderr: "" Jun 4 11:15:56.616: INFO: stdout: "true" Jun 4 11:15:56.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nhvvx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:56.715: INFO: stderr: "" Jun 4 11:15:56.715: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 4 11:15:56.715: INFO: validating pod update-demo-nautilus-nhvvx Jun 4 11:15:56.719: INFO: got data: { "image": "nautilus.jpg" } Jun 4 11:15:56.719: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 4 11:15:56.719: INFO: update-demo-nautilus-nhvvx is verified up and running Jun 4 11:15:56.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sqlgl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:56.842: INFO: stderr: "" Jun 4 11:15:56.842: INFO: stdout: "true" Jun 4 11:15:56.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sqlgl -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:56.940: INFO: stderr: "" Jun 4 11:15:56.940: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 4 11:15:56.940: INFO: validating pod update-demo-nautilus-sqlgl Jun 4 11:15:56.944: INFO: got data: { "image": "nautilus.jpg" } Jun 4 11:15:56.944: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 4 11:15:56.944: INFO: update-demo-nautilus-sqlgl is verified up and running STEP: using delete to clean up resources Jun 4 11:15:56.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:57.069: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 4 11:15:57.069: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 4 11:15:57.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-8zkcc' Jun 4 11:15:57.176: INFO: stderr: "No resources found.\n" Jun 4 11:15:57.176: INFO: stdout: "" Jun 4 11:15:57.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-8zkcc -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 4 11:15:57.273: INFO: stderr: "" Jun 4 11:15:57.273: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:15:57.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8zkcc" for this suite. 
Jun 4 11:16:03.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:16:03.525: INFO: namespace: e2e-tests-kubectl-8zkcc, resource: bindings, ignored listing per whitelist Jun 4 11:16:03.574: INFO: namespace e2e-tests-kubectl-8zkcc deletion completed in 6.29770491s • [SLOW TEST:26.424 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:16:03.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0604 11:16:34.216424 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 4 11:16:34.216: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:16:34.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-95sxl" for this suite. 
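For reference: the garbage-collector spec above deletes the Deployment with deleteOptions.propagationPolicy set to Orphan and then waits 30 seconds to confirm the collector does not remove the ReplicaSet it created. A rough command-line equivalent is sketched below with placeholder names; on the kubectl 1.13 vintage used in this run the orphaning flag was spelled --cascade=false, while current releases use --cascade=orphan.

# Create a deployment, then delete it while orphaning its dependents.
kubectl create deployment <name> -n <namespace> --image=k8s.gcr.io/pause:3.1
kubectl delete deployment <name> -n <namespace> --cascade=orphan

# The ReplicaSet created by the deployment should still be listed afterwards.
kubectl get rs -n <namespace>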
Jun 4 11:16:40.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:16:40.303: INFO: namespace: e2e-tests-gc-95sxl, resource: bindings, ignored listing per whitelist Jun 4 11:16:40.348: INFO: namespace e2e-tests-gc-95sxl deletion completed in 6.128743144s • [SLOW TEST:36.774 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:16:40.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:16:44.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-qk2dh" for this suite. 
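For reference: the Kubelet spec above exercises printing a busybox command's output to the container log, as its name states. A hand-run approximation with placeholder names and a crude fixed wait, not the suite's own mechanism:

# Run a one-shot busybox pod that echoes a known string.
kubectl run busybox-logs --image=docker.io/library/busybox:1.29 --restart=Never -- /bin/sh -c 'echo "kubelet log check"'

# Give it a moment to complete, then read the output back through the logs endpoint.
sleep 10
kubectl logs busybox-logs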
Jun 4 11:17:34.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:17:34.870: INFO: namespace: e2e-tests-kubelet-test-qk2dh, resource: bindings, ignored listing per whitelist Jun 4 11:17:34.889: INFO: namespace e2e-tests-kubelet-test-qk2dh deletion completed in 50.112890876s • [SLOW TEST:54.540 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:17:34.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jun 4 11:17:39.036: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-fd95cc66-a654-11ea-86dc-0242ac110018,GenerateName:,Namespace:e2e-tests-events-rnssh,SelfLink:/api/v1/namespaces/e2e-tests-events-rnssh/pods/send-events-fd95cc66-a654-11ea-86dc-0242ac110018,UID:fd96619c-a654-11ea-99e8-0242ac110002,ResourceVersion:14169297,Generation:0,CreationTimestamp:2020-06-04 11:17:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 13802201,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-f7skf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f7skf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-f7skf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f32e30} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001f32e50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:17:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:17:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:17:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:17:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.38,StartTime:2020-06-04 11:17:35 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-06-04 11:17:37 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://da9c051e6b6d97f2a8f45634933b702121960948a3699a5cc8507f11992908ee}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jun 4 11:17:41.041: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jun 4 11:17:43.047: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:17:43.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-rnssh" for this suite. Jun 4 11:18:21.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:18:21.158: INFO: namespace: e2e-tests-events-rnssh, resource: bindings, ignored listing per whitelist Jun 4 11:18:21.176: INFO: namespace e2e-tests-events-rnssh deletion completed in 38.095910715s • [SLOW TEST:46.287 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:18:21.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jun 4 11:18:21.267: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 4 11:18:21.341: INFO: Waiting for terminating namespaces to be deleted... 
Jun 4 11:18:21.344: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Jun 4 11:18:21.350: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Jun 4 11:18:21.350: INFO: Container kube-proxy ready: true, restart count 0 Jun 4 11:18:21.350: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 4 11:18:21.350: INFO: Container kindnet-cni ready: true, restart count 0 Jun 4 11:18:21.350: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 4 11:18:21.350: INFO: Container coredns ready: true, restart count 0 Jun 4 11:18:21.350: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Jun 4 11:18:21.356: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 4 11:18:21.356: INFO: Container coredns ready: true, restart count 0 Jun 4 11:18:21.356: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 4 11:18:21.356: INFO: Container kindnet-cni ready: true, restart count 0 Jun 4 11:18:21.356: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 4 11:18:21.356: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 Jun 4 11:18:21.428: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker Jun 4 11:18:21.428: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 Jun 4 11:18:21.428: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker Jun 4 11:18:21.428: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 Jun 4 11:18:21.428: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 Jun 4 11:18:21.428: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-194012bc-a655-11ea-86dc-0242ac110018.1615532e44438617], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-66wmc/filler-pod-194012bc-a655-11ea-86dc-0242ac110018 to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-194012bc-a655-11ea-86dc-0242ac110018.1615532e9143f297], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-194012bc-a655-11ea-86dc-0242ac110018.1615532ee309ca48], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-194012bc-a655-11ea-86dc-0242ac110018.1615532ef7174c5a], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-194cc5be-a655-11ea-86dc-0242ac110018.1615532e45aef948], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-66wmc/filler-pod-194cc5be-a655-11ea-86dc-0242ac110018 to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-194cc5be-a655-11ea-86dc-0242ac110018.1615532ec84fc4c4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-194cc5be-a655-11ea-86dc-0242ac110018.1615532f09574b8a], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-194cc5be-a655-11ea-86dc-0242ac110018.1615532f190069fe], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.1615532fac6f290f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:18:28.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-66wmc" for this suite. 
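For reference: the scheduling spec above saturates both workers with pause-image filler pods and then shows that one more pod requesting CPU that is no longer available is rejected with a FailedScheduling / Insufficient cpu event. The same failure mode can be reproduced on its own with an intentionally oversized request; the manifest below is illustrative, and the namespace and request value are placeholders.

kubectl apply -n <namespace> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1000"   # far larger than any node's allocatable CPU (illustrative)
EOF

# The pod stays Pending; its events report FailedScheduling with Insufficient cpu.
kubectl describe pod additional-pod -n <namespace>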
Jun 4 11:18:34.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:18:34.711: INFO: namespace: e2e-tests-sched-pred-66wmc, resource: bindings, ignored listing per whitelist Jun 4 11:18:34.753: INFO: namespace e2e-tests-sched-pred-66wmc deletion completed in 6.084441615s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:13.577 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:18:34.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jun 4 11:18:35.131: INFO: PodSpec: initContainers in spec.initContainers Jun 4 11:19:23.496: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-216af3ee-a655-11ea-86dc-0242ac110018", GenerateName:"", Namespace:"e2e-tests-init-container-4d4st", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-4d4st/pods/pod-init-216af3ee-a655-11ea-86dc-0242ac110018", UID:"216c777e-a655-11ea-99e8-0242ac110002", ResourceVersion:"14169592", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726866315, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"130994309"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-zsmsj", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001cba640), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zsmsj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zsmsj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zsmsj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), 
ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001837158), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001af4f60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0018371e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001837200)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001837208), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00183720c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866315, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866315, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866315, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866315, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.40", StartTime:(*v1.Time)(0xc000d10c80), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000d10cc0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0019b3ce0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://a41514d8de695ddd81f599c03642c481af81e0e4c1cbdb91fa8df0d793b3d427"}, 
v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000d10ce0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000d10ca0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:19:23.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-4d4st" for this suite. Jun 4 11:19:45.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:19:45.856: INFO: namespace: e2e-tests-init-container-4d4st, resource: bindings, ignored listing per whitelist Jun 4 11:19:45.856: INFO: namespace e2e-tests-init-container-4d4st deletion completed in 22.17131131s • [SLOW TEST:71.103 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:19:45.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-7nzqz Jun 4 11:19:49.998: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-7nzqz STEP: checking the pod's current state and verifying that restartCount is present Jun 4 11:19:50.002: INFO: Initial restart count of pod liveness-exec is 0 Jun 4 11:20:40.112: INFO: Restart count of pod e2e-tests-container-probe-7nzqz/liveness-exec is now 1 (50.110070247s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:20:40.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-7nzqz" for this suite. Jun 4 11:20:46.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:20:46.208: INFO: namespace: e2e-tests-container-probe-7nzqz, resource: bindings, ignored listing per whitelist Jun 4 11:20:46.239: INFO: namespace e2e-tests-container-probe-7nzqz deletion completed in 6.100849902s • [SLOW TEST:60.382 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:20:46.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-9g7c6 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-9g7c6 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-9g7c6 Jun 4 11:20:46.389: INFO: Found 0 stateful pods, waiting for 1 Jun 4 11:20:56.395: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jun 4 11:20:56.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 4 11:20:56.655: INFO: stderr: "I0604 11:20:56.535313 1988 log.go:172] (0xc00014c630) (0xc00073a640) Create stream\nI0604 11:20:56.535372 1988 log.go:172] (0xc00014c630) (0xc00073a640) Stream added, broadcasting: 1\nI0604 11:20:56.538280 1988 log.go:172] (0xc00014c630) Reply frame received for 1\nI0604 11:20:56.538354 1988 log.go:172] (0xc00014c630) (0xc000484c80) Create stream\nI0604 11:20:56.538385 1988 log.go:172] (0xc00014c630) (0xc000484c80) Stream added, broadcasting: 3\nI0604 11:20:56.539397 1988 log.go:172] (0xc00014c630) Reply frame received for 3\nI0604 11:20:56.539429 1988 log.go:172] (0xc00014c630) (0xc000518000) Create stream\nI0604 11:20:56.539436 1988 log.go:172] (0xc00014c630) (0xc000518000) Stream 
added, broadcasting: 5\nI0604 11:20:56.540405 1988 log.go:172] (0xc00014c630) Reply frame received for 5\nI0604 11:20:56.647930 1988 log.go:172] (0xc00014c630) Data frame received for 3\nI0604 11:20:56.647974 1988 log.go:172] (0xc000484c80) (3) Data frame handling\nI0604 11:20:56.648018 1988 log.go:172] (0xc000484c80) (3) Data frame sent\nI0604 11:20:56.648043 1988 log.go:172] (0xc00014c630) Data frame received for 3\nI0604 11:20:56.648066 1988 log.go:172] (0xc000484c80) (3) Data frame handling\nI0604 11:20:56.648204 1988 log.go:172] (0xc00014c630) Data frame received for 5\nI0604 11:20:56.648299 1988 log.go:172] (0xc000518000) (5) Data frame handling\nI0604 11:20:56.650665 1988 log.go:172] (0xc00014c630) Data frame received for 1\nI0604 11:20:56.650693 1988 log.go:172] (0xc00073a640) (1) Data frame handling\nI0604 11:20:56.650714 1988 log.go:172] (0xc00073a640) (1) Data frame sent\nI0604 11:20:56.650756 1988 log.go:172] (0xc00014c630) (0xc00073a640) Stream removed, broadcasting: 1\nI0604 11:20:56.650780 1988 log.go:172] (0xc00014c630) Go away received\nI0604 11:20:56.650921 1988 log.go:172] (0xc00014c630) (0xc00073a640) Stream removed, broadcasting: 1\nI0604 11:20:56.650939 1988 log.go:172] (0xc00014c630) (0xc000484c80) Stream removed, broadcasting: 3\nI0604 11:20:56.650946 1988 log.go:172] (0xc00014c630) (0xc000518000) Stream removed, broadcasting: 5\n" Jun 4 11:20:56.655: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 4 11:20:56.655: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 4 11:20:56.692: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 4 11:21:06.697: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 4 11:21:06.697: INFO: Waiting for statefulset status.replicas updated to 0 Jun 4 11:21:06.720: INFO: POD NODE PHASE GRACE CONDITIONS Jun 4 11:21:06.720: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:20:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:20:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:20:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:20:46 +0000 UTC }] Jun 4 11:21:06.720: INFO: Jun 4 11:21:06.720: INFO: StatefulSet ss has not reached scale 3, at 1 Jun 4 11:21:07.725: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987974308s Jun 4 11:21:08.961: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.983046989s Jun 4 11:21:10.002: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.747250461s Jun 4 11:21:11.007: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.705970126s Jun 4 11:21:12.012: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.701071745s Jun 4 11:21:13.018: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.696185351s Jun 4 11:21:14.024: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.689604135s Jun 4 11:21:15.032: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.684291292s Jun 4 11:21:16.039: INFO: Verifying statefulset ss doesn't scale past 3 for another 675.64638ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace 
e2e-tests-statefulset-9g7c6 Jun 4 11:21:17.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:21:17.282: INFO: stderr: "I0604 11:21:17.165735 2011 log.go:172] (0xc000708420) (0xc00074e640) Create stream\nI0604 11:21:17.165813 2011 log.go:172] (0xc000708420) (0xc00074e640) Stream added, broadcasting: 1\nI0604 11:21:17.168408 2011 log.go:172] (0xc000708420) Reply frame received for 1\nI0604 11:21:17.168485 2011 log.go:172] (0xc000708420) (0xc000450c80) Create stream\nI0604 11:21:17.168511 2011 log.go:172] (0xc000708420) (0xc000450c80) Stream added, broadcasting: 3\nI0604 11:21:17.169905 2011 log.go:172] (0xc000708420) Reply frame received for 3\nI0604 11:21:17.169947 2011 log.go:172] (0xc000708420) (0xc000450dc0) Create stream\nI0604 11:21:17.169959 2011 log.go:172] (0xc000708420) (0xc000450dc0) Stream added, broadcasting: 5\nI0604 11:21:17.171060 2011 log.go:172] (0xc000708420) Reply frame received for 5\nI0604 11:21:17.276602 2011 log.go:172] (0xc000708420) Data frame received for 5\nI0604 11:21:17.276629 2011 log.go:172] (0xc000450dc0) (5) Data frame handling\nI0604 11:21:17.276659 2011 log.go:172] (0xc000708420) Data frame received for 3\nI0604 11:21:17.276684 2011 log.go:172] (0xc000450c80) (3) Data frame handling\nI0604 11:21:17.276713 2011 log.go:172] (0xc000450c80) (3) Data frame sent\nI0604 11:21:17.276721 2011 log.go:172] (0xc000708420) Data frame received for 3\nI0604 11:21:17.276735 2011 log.go:172] (0xc000450c80) (3) Data frame handling\nI0604 11:21:17.277942 2011 log.go:172] (0xc000708420) Data frame received for 1\nI0604 11:21:17.277969 2011 log.go:172] (0xc00074e640) (1) Data frame handling\nI0604 11:21:17.277983 2011 log.go:172] (0xc00074e640) (1) Data frame sent\nI0604 11:21:17.277996 2011 log.go:172] (0xc000708420) (0xc00074e640) Stream removed, broadcasting: 1\nI0604 11:21:17.278015 2011 log.go:172] (0xc000708420) Go away received\nI0604 11:21:17.278187 2011 log.go:172] (0xc000708420) (0xc00074e640) Stream removed, broadcasting: 1\nI0604 11:21:17.278199 2011 log.go:172] (0xc000708420) (0xc000450c80) Stream removed, broadcasting: 3\nI0604 11:21:17.278207 2011 log.go:172] (0xc000708420) (0xc000450dc0) Stream removed, broadcasting: 5\n" Jun 4 11:21:17.282: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 4 11:21:17.282: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 4 11:21:17.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:21:17.582: INFO: stderr: "I0604 11:21:17.422902 2034 log.go:172] (0xc0007ea2c0) (0xc0006f0640) Create stream\nI0604 11:21:17.422982 2034 log.go:172] (0xc0007ea2c0) (0xc0006f0640) Stream added, broadcasting: 1\nI0604 11:21:17.426209 2034 log.go:172] (0xc0007ea2c0) Reply frame received for 1\nI0604 11:21:17.426260 2034 log.go:172] (0xc0007ea2c0) (0xc00066cd20) Create stream\nI0604 11:21:17.426275 2034 log.go:172] (0xc0007ea2c0) (0xc00066cd20) Stream added, broadcasting: 3\nI0604 11:21:17.427018 2034 log.go:172] (0xc0007ea2c0) Reply frame received for 3\nI0604 11:21:17.427054 2034 log.go:172] (0xc0007ea2c0) (0xc000318000) Create stream\nI0604 11:21:17.427067 2034 log.go:172] (0xc0007ea2c0) (0xc000318000) Stream added, broadcasting: 5\nI0604 
11:21:17.427789 2034 log.go:172] (0xc0007ea2c0) Reply frame received for 5\nI0604 11:21:17.577781 2034 log.go:172] (0xc0007ea2c0) Data frame received for 5\nI0604 11:21:17.577813 2034 log.go:172] (0xc000318000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0604 11:21:17.577842 2034 log.go:172] (0xc0007ea2c0) Data frame received for 3\nI0604 11:21:17.577864 2034 log.go:172] (0xc00066cd20) (3) Data frame handling\nI0604 11:21:17.577872 2034 log.go:172] (0xc00066cd20) (3) Data frame sent\nI0604 11:21:17.577882 2034 log.go:172] (0xc000318000) (5) Data frame sent\nI0604 11:21:17.577905 2034 log.go:172] (0xc0007ea2c0) Data frame received for 5\nI0604 11:21:17.577915 2034 log.go:172] (0xc000318000) (5) Data frame handling\nI0604 11:21:17.577934 2034 log.go:172] (0xc0007ea2c0) Data frame received for 3\nI0604 11:21:17.577942 2034 log.go:172] (0xc00066cd20) (3) Data frame handling\nI0604 11:21:17.579070 2034 log.go:172] (0xc0007ea2c0) Data frame received for 1\nI0604 11:21:17.579088 2034 log.go:172] (0xc0006f0640) (1) Data frame handling\nI0604 11:21:17.579100 2034 log.go:172] (0xc0006f0640) (1) Data frame sent\nI0604 11:21:17.579111 2034 log.go:172] (0xc0007ea2c0) (0xc0006f0640) Stream removed, broadcasting: 1\nI0604 11:21:17.579146 2034 log.go:172] (0xc0007ea2c0) Go away received\nI0604 11:21:17.579306 2034 log.go:172] (0xc0007ea2c0) (0xc0006f0640) Stream removed, broadcasting: 1\nI0604 11:21:17.579323 2034 log.go:172] (0xc0007ea2c0) (0xc00066cd20) Stream removed, broadcasting: 3\nI0604 11:21:17.579332 2034 log.go:172] (0xc0007ea2c0) (0xc000318000) Stream removed, broadcasting: 5\n" Jun 4 11:21:17.583: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 4 11:21:17.583: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 4 11:21:17.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:21:17.782: INFO: stderr: "I0604 11:21:17.717334 2057 log.go:172] (0xc00082a2c0) (0xc000599360) Create stream\nI0604 11:21:17.717389 2057 log.go:172] (0xc00082a2c0) (0xc000599360) Stream added, broadcasting: 1\nI0604 11:21:17.718865 2057 log.go:172] (0xc00082a2c0) Reply frame received for 1\nI0604 11:21:17.718903 2057 log.go:172] (0xc00082a2c0) (0xc00079e000) Create stream\nI0604 11:21:17.718912 2057 log.go:172] (0xc00082a2c0) (0xc00079e000) Stream added, broadcasting: 3\nI0604 11:21:17.719531 2057 log.go:172] (0xc00082a2c0) Reply frame received for 3\nI0604 11:21:17.719562 2057 log.go:172] (0xc00082a2c0) (0xc000402000) Create stream\nI0604 11:21:17.719573 2057 log.go:172] (0xc00082a2c0) (0xc000402000) Stream added, broadcasting: 5\nI0604 11:21:17.720141 2057 log.go:172] (0xc00082a2c0) Reply frame received for 5\nI0604 11:21:17.776787 2057 log.go:172] (0xc00082a2c0) Data frame received for 5\nI0604 11:21:17.776815 2057 log.go:172] (0xc00082a2c0) Data frame received for 3\nI0604 11:21:17.776831 2057 log.go:172] (0xc00079e000) (3) Data frame handling\nI0604 11:21:17.776841 2057 log.go:172] (0xc00079e000) (3) Data frame sent\nI0604 11:21:17.776849 2057 log.go:172] (0xc00082a2c0) Data frame received for 3\nI0604 11:21:17.776860 2057 log.go:172] (0xc00079e000) (3) Data frame handling\nI0604 11:21:17.776876 2057 log.go:172] (0xc000402000) (5) Data frame handling\nI0604 11:21:17.776882 2057 log.go:172] (0xc000402000) (5) Data 
frame sent\nI0604 11:21:17.776886 2057 log.go:172] (0xc00082a2c0) Data frame received for 5\nI0604 11:21:17.776890 2057 log.go:172] (0xc000402000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0604 11:21:17.778318 2057 log.go:172] (0xc00082a2c0) Data frame received for 1\nI0604 11:21:17.778332 2057 log.go:172] (0xc000599360) (1) Data frame handling\nI0604 11:21:17.778339 2057 log.go:172] (0xc000599360) (1) Data frame sent\nI0604 11:21:17.778347 2057 log.go:172] (0xc00082a2c0) (0xc000599360) Stream removed, broadcasting: 1\nI0604 11:21:17.778386 2057 log.go:172] (0xc00082a2c0) Go away received\nI0604 11:21:17.778466 2057 log.go:172] (0xc00082a2c0) (0xc000599360) Stream removed, broadcasting: 1\nI0604 11:21:17.778477 2057 log.go:172] (0xc00082a2c0) (0xc00079e000) Stream removed, broadcasting: 3\nI0604 11:21:17.778482 2057 log.go:172] (0xc00082a2c0) (0xc000402000) Stream removed, broadcasting: 5\n" Jun 4 11:21:17.782: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 4 11:21:17.782: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 4 11:21:17.794: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 4 11:21:17.794: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 4 11:21:17.794: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jun 4 11:21:17.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 4 11:21:18.001: INFO: stderr: "I0604 11:21:17.922919 2079 log.go:172] (0xc00013a630) (0xc000736640) Create stream\nI0604 11:21:17.922985 2079 log.go:172] (0xc00013a630) (0xc000736640) Stream added, broadcasting: 1\nI0604 11:21:17.926342 2079 log.go:172] (0xc00013a630) Reply frame received for 1\nI0604 11:21:17.926388 2079 log.go:172] (0xc00013a630) (0xc0007366e0) Create stream\nI0604 11:21:17.926417 2079 log.go:172] (0xc00013a630) (0xc0007366e0) Stream added, broadcasting: 3\nI0604 11:21:17.927584 2079 log.go:172] (0xc00013a630) Reply frame received for 3\nI0604 11:21:17.927621 2079 log.go:172] (0xc00013a630) (0xc000736780) Create stream\nI0604 11:21:17.927633 2079 log.go:172] (0xc00013a630) (0xc000736780) Stream added, broadcasting: 5\nI0604 11:21:17.928512 2079 log.go:172] (0xc00013a630) Reply frame received for 5\nI0604 11:21:17.994395 2079 log.go:172] (0xc00013a630) Data frame received for 5\nI0604 11:21:17.994454 2079 log.go:172] (0xc00013a630) Data frame received for 3\nI0604 11:21:17.994579 2079 log.go:172] (0xc000736780) (5) Data frame handling\nI0604 11:21:17.994718 2079 log.go:172] (0xc0007366e0) (3) Data frame handling\nI0604 11:21:17.994746 2079 log.go:172] (0xc0007366e0) (3) Data frame sent\nI0604 11:21:17.994757 2079 log.go:172] (0xc00013a630) Data frame received for 3\nI0604 11:21:17.994825 2079 log.go:172] (0xc0007366e0) (3) Data frame handling\nI0604 11:21:17.996173 2079 log.go:172] (0xc00013a630) Data frame received for 1\nI0604 11:21:17.996198 2079 log.go:172] (0xc000736640) (1) Data frame handling\nI0604 11:21:17.996215 2079 log.go:172] (0xc000736640) (1) Data frame sent\nI0604 11:21:17.996397 2079 log.go:172] (0xc00013a630) (0xc000736640) Stream removed, broadcasting: 1\nI0604 11:21:17.996436 2079 
log.go:172] (0xc00013a630) Go away received\nI0604 11:21:17.996646 2079 log.go:172] (0xc00013a630) (0xc000736640) Stream removed, broadcasting: 1\nI0604 11:21:17.996669 2079 log.go:172] (0xc00013a630) (0xc0007366e0) Stream removed, broadcasting: 3\nI0604 11:21:17.996681 2079 log.go:172] (0xc00013a630) (0xc000736780) Stream removed, broadcasting: 5\n" Jun 4 11:21:18.001: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 4 11:21:18.002: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 4 11:21:18.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 4 11:21:18.254: INFO: stderr: "I0604 11:21:18.133705 2102 log.go:172] (0xc00014c840) (0xc000734640) Create stream\nI0604 11:21:18.133766 2102 log.go:172] (0xc00014c840) (0xc000734640) Stream added, broadcasting: 1\nI0604 11:21:18.136545 2102 log.go:172] (0xc00014c840) Reply frame received for 1\nI0604 11:21:18.136577 2102 log.go:172] (0xc00014c840) (0xc0007346e0) Create stream\nI0604 11:21:18.136587 2102 log.go:172] (0xc00014c840) (0xc0007346e0) Stream added, broadcasting: 3\nI0604 11:21:18.137655 2102 log.go:172] (0xc00014c840) Reply frame received for 3\nI0604 11:21:18.137695 2102 log.go:172] (0xc00014c840) (0xc0007e8dc0) Create stream\nI0604 11:21:18.137708 2102 log.go:172] (0xc00014c840) (0xc0007e8dc0) Stream added, broadcasting: 5\nI0604 11:21:18.138769 2102 log.go:172] (0xc00014c840) Reply frame received for 5\nI0604 11:21:18.244695 2102 log.go:172] (0xc00014c840) Data frame received for 3\nI0604 11:21:18.244740 2102 log.go:172] (0xc0007346e0) (3) Data frame handling\nI0604 11:21:18.244778 2102 log.go:172] (0xc0007346e0) (3) Data frame sent\nI0604 11:21:18.244801 2102 log.go:172] (0xc00014c840) Data frame received for 3\nI0604 11:21:18.244821 2102 log.go:172] (0xc0007346e0) (3) Data frame handling\nI0604 11:21:18.245033 2102 log.go:172] (0xc00014c840) Data frame received for 5\nI0604 11:21:18.245052 2102 log.go:172] (0xc0007e8dc0) (5) Data frame handling\nI0604 11:21:18.247235 2102 log.go:172] (0xc00014c840) Data frame received for 1\nI0604 11:21:18.247266 2102 log.go:172] (0xc000734640) (1) Data frame handling\nI0604 11:21:18.247285 2102 log.go:172] (0xc000734640) (1) Data frame sent\nI0604 11:21:18.247303 2102 log.go:172] (0xc00014c840) (0xc000734640) Stream removed, broadcasting: 1\nI0604 11:21:18.247342 2102 log.go:172] (0xc00014c840) Go away received\nI0604 11:21:18.247573 2102 log.go:172] (0xc00014c840) (0xc000734640) Stream removed, broadcasting: 1\nI0604 11:21:18.247634 2102 log.go:172] (0xc00014c840) (0xc0007346e0) Stream removed, broadcasting: 3\nI0604 11:21:18.247673 2102 log.go:172] (0xc00014c840) (0xc0007e8dc0) Stream removed, broadcasting: 5\n" Jun 4 11:21:18.254: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 4 11:21:18.254: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 4 11:21:18.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 4 11:21:18.498: INFO: stderr: "I0604 11:21:18.400485 2124 log.go:172] (0xc0008562c0) (0xc00073c640) Create stream\nI0604 11:21:18.400555 2124 log.go:172] (0xc0008562c0) 
(0xc00073c640) Stream added, broadcasting: 1\nI0604 11:21:18.403445 2124 log.go:172] (0xc0008562c0) Reply frame received for 1\nI0604 11:21:18.403477 2124 log.go:172] (0xc0008562c0) (0xc000602d20) Create stream\nI0604 11:21:18.403487 2124 log.go:172] (0xc0008562c0) (0xc000602d20) Stream added, broadcasting: 3\nI0604 11:21:18.404261 2124 log.go:172] (0xc0008562c0) Reply frame received for 3\nI0604 11:21:18.404316 2124 log.go:172] (0xc0008562c0) (0xc00073c6e0) Create stream\nI0604 11:21:18.404342 2124 log.go:172] (0xc0008562c0) (0xc00073c6e0) Stream added, broadcasting: 5\nI0604 11:21:18.405415 2124 log.go:172] (0xc0008562c0) Reply frame received for 5\nI0604 11:21:18.490446 2124 log.go:172] (0xc0008562c0) Data frame received for 3\nI0604 11:21:18.490490 2124 log.go:172] (0xc000602d20) (3) Data frame handling\nI0604 11:21:18.490516 2124 log.go:172] (0xc000602d20) (3) Data frame sent\nI0604 11:21:18.490926 2124 log.go:172] (0xc0008562c0) Data frame received for 3\nI0604 11:21:18.490952 2124 log.go:172] (0xc000602d20) (3) Data frame handling\nI0604 11:21:18.490998 2124 log.go:172] (0xc0008562c0) Data frame received for 5\nI0604 11:21:18.491028 2124 log.go:172] (0xc00073c6e0) (5) Data frame handling\nI0604 11:21:18.492671 2124 log.go:172] (0xc0008562c0) Data frame received for 1\nI0604 11:21:18.492750 2124 log.go:172] (0xc00073c640) (1) Data frame handling\nI0604 11:21:18.492789 2124 log.go:172] (0xc00073c640) (1) Data frame sent\nI0604 11:21:18.492832 2124 log.go:172] (0xc0008562c0) (0xc00073c640) Stream removed, broadcasting: 1\nI0604 11:21:18.492877 2124 log.go:172] (0xc0008562c0) Go away received\nI0604 11:21:18.493098 2124 log.go:172] (0xc0008562c0) (0xc00073c640) Stream removed, broadcasting: 1\nI0604 11:21:18.493303 2124 log.go:172] (0xc0008562c0) (0xc000602d20) Stream removed, broadcasting: 3\nI0604 11:21:18.493324 2124 log.go:172] (0xc0008562c0) (0xc00073c6e0) Stream removed, broadcasting: 5\n" Jun 4 11:21:18.498: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 4 11:21:18.498: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 4 11:21:18.498: INFO: Waiting for statefulset status.replicas updated to 0 Jun 4 11:21:18.501: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jun 4 11:21:28.509: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 4 11:21:28.509: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 4 11:21:28.509: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 4 11:21:28.538: INFO: POD NODE PHASE GRACE CONDITIONS Jun 4 11:21:28.538: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:20:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:20:46 +0000 UTC }] Jun 4 11:21:28.538: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 
+0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC }] Jun 4 11:21:28.538: INFO: ss-2 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC }] Jun 4 11:21:28.538: INFO: Jun 4 11:21:28.538: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 4 11:21:29.663: INFO: POD NODE PHASE GRACE CONDITIONS Jun 4 11:21:29.663: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:20:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:20:46 +0000 UTC }] Jun 4 11:21:29.663: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC }] Jun 4 11:21:29.663: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC }] Jun 4 11:21:29.663: INFO: Jun 4 11:21:29.663: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 4 11:21:30.670: INFO: POD NODE PHASE GRACE CONDITIONS Jun 4 11:21:30.670: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:20:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:20:46 +0000 UTC }] Jun 4 11:21:30.670: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC }] Jun 4 11:21:30.670: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC }] Jun 4 11:21:30.670: INFO: Jun 4 11:21:30.670: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 4 11:21:31.675: INFO: POD NODE PHASE GRACE CONDITIONS Jun 4 11:21:31.675: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:20:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:20:46 +0000 UTC }] Jun 4 11:21:31.675: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC }] Jun 4 11:21:31.675: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC }] Jun 4 11:21:31.675: INFO: Jun 4 11:21:31.675: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 4 11:21:32.680: INFO: POD NODE PHASE GRACE CONDITIONS Jun 4 11:21:32.680: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC }] Jun 4 11:21:32.680: INFO: Jun 4 11:21:32.681: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 4 11:21:33.685: INFO: POD NODE PHASE GRACE CONDITIONS Jun 4 11:21:33.685: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC }] Jun 4 11:21:33.686: INFO: Jun 4 11:21:33.686: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 4 11:21:34.691: INFO: POD NODE PHASE GRACE CONDITIONS Jun 4 11:21:34.691: INFO: ss-1 
hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC }] Jun 4 11:21:34.691: INFO: Jun 4 11:21:34.691: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 4 11:21:35.695: INFO: POD NODE PHASE GRACE CONDITIONS Jun 4 11:21:35.695: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC }] Jun 4 11:21:35.695: INFO: Jun 4 11:21:35.695: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 4 11:21:36.700: INFO: POD NODE PHASE GRACE CONDITIONS Jun 4 11:21:36.700: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC }] Jun 4 11:21:36.700: INFO: Jun 4 11:21:36.700: INFO: StatefulSet ss has not reached scale 0, at 1 Jun 4 11:21:37.705: INFO: POD NODE PHASE GRACE CONDITIONS Jun 4 11:21:37.705: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:21:06 +0000 UTC }] Jun 4 11:21:37.706: INFO: Jun 4 11:21:37.706: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-9g7c6 Jun 4 11:21:38.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:21:38.849: INFO: rc: 1 Jun 4 11:21:38.849: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001123f20 exit status 1 true [0xc000bda400 0xc000bda418 0xc000bda430] [0xc000bda400 0xc000bda418 0xc000bda430] [0xc000bda410 0xc000bda428] [0x935700 0x935700] 0xc001e17e60 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jun 4 11:21:48.849: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:21:48.948: INFO: rc: 1 Jun 4 11:21:48.948: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00177cf30 exit status 1 true [0xc001764ef0 0xc001764f08 0xc001764f38] [0xc001764ef0 0xc001764f08 0xc001764f38] [0xc001764f00 0xc001764f20] [0x935700 0x935700] 0xc001d132c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:21:58.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:21:59.046: INFO: rc: 1 Jun 4 11:21:59.046: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc000bd60f0 exit status 1 true [0xc000bda438 0xc000bda450 0xc000bda468] [0xc000bda438 0xc000bda450 0xc000bda468] [0xc000bda448 0xc000bda460] [0x935700 0x935700] 0xc001af4960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:22:09.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:22:09.149: INFO: rc: 1 Jun 4 11:22:09.150: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0015781b0 exit status 1 true [0xc0017760c0 0xc001776100 0xc001776130] [0xc0017760c0 0xc001776100 0xc001776130] [0xc0017760f8 0xc001776110] [0x935700 0x935700] 0xc001b044e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:22:19.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:22:19.241: INFO: rc: 1 Jun 4 11:22:19.241: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001b22240 exit status 1 true [0xc0005d5440 0xc0005d5598 0xc0005d5640] [0xc0005d5440 0xc0005d5598 0xc0005d5640] [0xc0005d5518 0xc0005d55f0] [0x935700 0x935700] 0xc001e16c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:22:29.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:22:29.330: INFO: rc: 1 
Jun 4 11:22:29.330: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001b22360 exit status 1 true [0xc0005d5660 0xc0005d5728 0xc0005d5800] [0xc0005d5660 0xc0005d5728 0xc0005d5800] [0xc0005d56d0 0xc0005d57e8] [0x935700 0x935700] 0xc001e173e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:22:39.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:22:39.426: INFO: rc: 1 Jun 4 11:22:39.426: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001b22540 exit status 1 true [0xc0005d5828 0xc0005d5858 0xc0005d5900] [0xc0005d5828 0xc0005d5858 0xc0005d5900] [0xc0005d5850 0xc0005d58b0] [0x935700 0x935700] 0xc001e177a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:22:49.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:22:49.518: INFO: rc: 1 Jun 4 11:22:49.518: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0015cc1e0 exit status 1 true [0xc00000e150 0xc00016f3a8 0xc00016f4a0] [0xc00000e150 0xc00016f3a8 0xc00016f4a0] [0xc00016e000 0xc00016f408] [0x935700 0x935700] 0xc00118d5c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:22:59.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:22:59.609: INFO: rc: 1 Jun 4 11:22:59.609: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0015cc360 exit status 1 true [0xc00016f518 0xc00016faf0 0xc00016fd38] [0xc00016f518 0xc00016faf0 0xc00016fd38] [0xc00016f618 0xc00016fc90] [0x935700 0x935700] 0xc00171eae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:23:09.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:23:09.706: INFO: rc: 1 Jun 4 11:23:09.706: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh 
-c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001068150 exit status 1 true [0xc00177a000 0xc00177a018 0xc00177a030] [0xc00177a000 0xc00177a018 0xc00177a030] [0xc00177a010 0xc00177a028] [0x935700 0x935700] 0xc0019d0300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:23:19.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:23:19.790: INFO: rc: 1 Jun 4 11:23:19.790: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001fdc1e0 exit status 1 true [0xc00091a000 0xc00091a030 0xc00091a080] [0xc00091a000 0xc00091a030 0xc00091a080] [0xc00091a028 0xc00091a068] [0x935700 0x935700] 0xc0019e9d40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:23:29.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:23:29.886: INFO: rc: 1 Jun 4 11:23:29.886: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0015cc480 exit status 1 true [0xc00016fde8 0xc00016fe90 0xc000c76010] [0xc00016fde8 0xc00016fe90 0xc000c76010] [0xc00016fe78 0xc00016ffd8] [0x935700 0x935700] 0xc00171f380 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:23:39.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:23:39.969: INFO: rc: 1 Jun 4 11:23:39.969: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0015cc600 exit status 1 true [0xc000c76020 0xc000c76038 0xc000c76050] [0xc000c76020 0xc000c76038 0xc000c76050] [0xc000c76030 0xc000c76048] [0x935700 0x935700] 0xc00171fc20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:23:49.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:23:50.072: INFO: rc: 1 Jun 4 11:23:50.072: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001fdc360 exit status 1 true [0xc00091a088 0xc00091a0c0 0xc00091a0e8] [0xc00091a088 
0xc00091a0c0 0xc00091a0e8] [0xc00091a0a0 0xc00091a0e0] [0x935700 0x935700] 0xc001ba6600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:24:00.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:24:00.183: INFO: rc: 1 Jun 4 11:24:00.183: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0015cc780 exit status 1 true [0xc000c76080 0xc000c760b8 0xc000c760f0] [0xc000c76080 0xc000c760b8 0xc000c760f0] [0xc000c760a0 0xc000c760d8] [0x935700 0x935700] 0xc001774840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:24:10.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:24:10.278: INFO: rc: 1 Jun 4 11:24:10.278: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001068180 exit status 1 true [0xc00016f3a8 0xc00016f4a0 0xc00016f618] [0xc00016f3a8 0xc00016f4a0 0xc00016f618] [0xc00016f408 0xc00016f5f0] [0x935700 0x935700] 0xc0019e9d40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:24:20.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:24:20.376: INFO: rc: 1 Jun 4 11:24:20.376: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001b222a0 exit status 1 true [0xc00000e150 0xc0005d54d0 0xc0005d55a8] [0xc00000e150 0xc0005d54d0 0xc0005d55a8] [0xc0005d5440 0xc0005d5598] [0x935700 0x935700] 0xc00171e660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:24:30.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:24:30.474: INFO: rc: 1 Jun 4 11:24:30.474: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0010682d0 exit status 1 true [0xc00016faf0 0xc00016fd38 0xc00016fe78] [0xc00016faf0 0xc00016fd38 0xc00016fe78] [0xc00016fc90 0xc00016fe20] [0x935700 0x935700] 0xc00118d5c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 
11:24:40.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:24:40.568: INFO: rc: 1 Jun 4 11:24:40.568: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001068420 exit status 1 true [0xc00016fe90 0xc00177a000 0xc00177a018] [0xc00016fe90 0xc00177a000 0xc00177a018] [0xc00016ffd8 0xc00177a010] [0x935700 0x935700] 0xc0019d04e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:24:50.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:24:50.664: INFO: rc: 1 Jun 4 11:24:50.664: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001fdc150 exit status 1 true [0xc00091a000 0xc00091a030 0xc00091a080] [0xc00091a000 0xc00091a030 0xc00091a080] [0xc00091a028 0xc00091a068] [0x935700 0x935700] 0xc001e16de0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:25:00.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:25:00.763: INFO: rc: 1 Jun 4 11:25:00.763: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0015cc2a0 exit status 1 true [0xc000c76010 0xc000c76030 0xc000c76048] [0xc000c76010 0xc000c76030 0xc000c76048] [0xc000c76028 0xc000c76040] [0x935700 0x935700] 0xc001ba6600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:25:10.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:25:10.889: INFO: rc: 1 Jun 4 11:25:10.890: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0015cc3f0 exit status 1 true [0xc000c76050 0xc000c760a0 0xc000c760d8] [0xc000c76050 0xc000c760a0 0xc000c760d8] [0xc000c76090 0xc000c760c8] [0x935700 0x935700] 0xc001ba6a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:25:20.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 
4 11:25:20.980: INFO: rc: 1 Jun 4 11:25:20.980: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0010685a0 exit status 1 true [0xc00177a020 0xc00177a038 0xc00177a050] [0xc00177a020 0xc00177a038 0xc00177a050] [0xc00177a030 0xc00177a048] [0x935700 0x935700] 0xc0019d09c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:25:30.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:25:31.070: INFO: rc: 1 Jun 4 11:25:31.070: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0010686c0 exit status 1 true [0xc00177a058 0xc00177a070 0xc00177a088] [0xc00177a058 0xc00177a070 0xc00177a088] [0xc00177a068 0xc00177a080] [0x935700 0x935700] 0xc0019d0f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:25:41.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:25:41.155: INFO: rc: 1 Jun 4 11:25:41.155: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0015cc5d0 exit status 1 true [0xc000c760f0 0xc000c76138 0xc000c76198] [0xc000c760f0 0xc000c76138 0xc000c76198] [0xc000c76118 0xc000c76180] [0x935700 0x935700] 0xc001ba6d80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:25:51.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:25:51.248: INFO: rc: 1 Jun 4 11:25:51.248: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001068870 exit status 1 true [0xc00177a090 0xc00177a0a8 0xc00177a0c0] [0xc00177a090 0xc00177a0a8 0xc00177a0c0] [0xc00177a0a0 0xc00177a0b8] [0x935700 0x935700] 0xc0019d1c20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:26:01.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:26:01.336: INFO: rc: 1 Jun 4 11:26:01.336: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0015cc840 exit status 1 true [0xc000c761a8 0xc000c761e8 0xc000c76240] [0xc000c761a8 0xc000c761e8 0xc000c76240] [0xc000c761d0 0xc000c76228] [0x935700 0x935700] 0xc001ba70e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:26:11.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:26:11.429: INFO: rc: 1 Jun 4 11:26:11.429: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0015cc990 exit status 1 true [0xc000c76268 0xc000c762a0 0xc000c762e8] [0xc000c76268 0xc000c762a0 0xc000c762e8] [0xc000c76290 0xc000c762d0] [0x935700 0x935700] 0xc001ba7500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:26:21.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:26:21.531: INFO: rc: 1 Jun 4 11:26:21.531: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001fdc120 exit status 1 true [0xc00000e150 0xc00016f3a8 0xc00016f4a0] [0xc00000e150 0xc00016f3a8 0xc00016f4a0] [0xc00016e000 0xc00016f408] [0x935700 0x935700] 0xc00118d500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:26:31.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:26:31.627: INFO: rc: 1 Jun 4 11:26:31.627: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc0015cc240 exit status 1 true [0xc00177a000 0xc00177a018 0xc00177a030] [0xc00177a000 0xc00177a018 0xc00177a030] [0xc00177a010 0xc00177a028] [0x935700 0x935700] 0xc0019e8300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Jun 4 11:26:41.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9g7c6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 11:26:41.721: INFO: rc: 1 Jun 4 11:26:41.721: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: Jun 4 11:26:41.721: INFO: Scaling statefulset ss to 0 Jun 4 11:26:41.726: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 4 11:26:41.728: INFO: Deleting all statefulset in ns e2e-tests-statefulset-9g7c6 Jun 4 11:26:41.730: INFO: Scaling statefulset ss to 0 Jun 4 11:26:41.736: INFO: Waiting for statefulset status.replicas updated to 0 Jun 4 11:26:41.737: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:26:41.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-9g7c6" for this suite. Jun 4 11:26:47.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:26:47.804: INFO: namespace: e2e-tests-statefulset-9g7c6, resource: bindings, ignored listing per whitelist Jun 4 11:26:47.885: INFO: namespace e2e-tests-statefulset-9g7c6 deletion completed in 6.130862587s • [SLOW TEST:361.646 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:26:47.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 4 11:26:48.006: INFO: Waiting up to 5m0s for pod "downwardapi-volume-472e81b8-a656-11ea-86dc-0242ac110018" in namespace "e2e-tests-downward-api-7klbv" to be "success or failure" Jun 4 11:26:48.019: INFO: Pod "downwardapi-volume-472e81b8-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.007476ms Jun 4 11:26:50.023: INFO: Pod "downwardapi-volume-472e81b8-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016758731s Jun 4 11:26:52.028: INFO: Pod "downwardapi-volume-472e81b8-a656-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02112914s STEP: Saw pod success Jun 4 11:26:52.028: INFO: Pod "downwardapi-volume-472e81b8-a656-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:26:52.031: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-472e81b8-a656-11ea-86dc-0242ac110018 container client-container: STEP: delete the pod Jun 4 11:26:52.056: INFO: Waiting for pod downwardapi-volume-472e81b8-a656-11ea-86dc-0242ac110018 to disappear Jun 4 11:26:52.060: INFO: Pod downwardapi-volume-472e81b8-a656-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:26:52.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7klbv" for this suite. Jun 4 11:26:58.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:26:58.144: INFO: namespace: e2e-tests-downward-api-7klbv, resource: bindings, ignored listing per whitelist Jun 4 11:26:58.205: INFO: namespace e2e-tests-downward-api-7klbv deletion completed in 6.140950458s • [SLOW TEST:10.320 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:26:58.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jun 4 11:26:58.376: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:27:04.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-5sjqd" for this suite. 
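The init-container behaviour exercised just above (failing init containers on a restartPolicy=Never pod must fail the pod and keep app containers from starting) can also be observed by hand with kubectl. This is only a rough sketch: the log does not print the generated pod name, so <pod> and <ns> below are placeholders, not names from the test run.
  # Pod phase should report Failed once the Never-restart init container gives up
  kubectl get pod <pod> -n <ns> -o jsonpath='{.status.phase}'
  # Init container statuses show the terminated state; app containers never leave Waiting
  kubectl get pod <pod> -n <ns> -o jsonpath='{.status.initContainerStatuses[*].state}'
  kubectl get pod <pod> -n <ns> -o jsonpath='{.status.containerStatuses[*].state.waiting.reason}'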
Jun 4 11:27:10.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:27:10.434: INFO: namespace: e2e-tests-init-container-5sjqd, resource: bindings, ignored listing per whitelist Jun 4 11:27:10.478: INFO: namespace e2e-tests-init-container-5sjqd deletion completed in 6.09799028s • [SLOW TEST:12.273 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:27:10.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-54a40f9c-a656-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 4 11:27:10.602: INFO: Waiting up to 5m0s for pod "pod-configmaps-54a68c6e-a656-11ea-86dc-0242ac110018" in namespace "e2e-tests-configmap-qr2sb" to be "success or failure" Jun 4 11:27:10.606: INFO: Pod "pod-configmaps-54a68c6e-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.167511ms Jun 4 11:27:12.610: INFO: Pod "pod-configmaps-54a68c6e-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007656776s Jun 4 11:27:14.614: INFO: Pod "pod-configmaps-54a68c6e-a656-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01160304s STEP: Saw pod success Jun 4 11:27:14.614: INFO: Pod "pod-configmaps-54a68c6e-a656-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:27:14.616: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-54a68c6e-a656-11ea-86dc-0242ac110018 container configmap-volume-test: STEP: delete the pod Jun 4 11:27:14.759: INFO: Waiting for pod pod-configmaps-54a68c6e-a656-11ea-86dc-0242ac110018 to disappear Jun 4 11:27:14.774: INFO: Pod pod-configmaps-54a68c6e-a656-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:27:14.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qr2sb" for this suite. 
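The ConfigMap-as-volume check above can be approximated from the command line. A sketch under assumed names: example-config, <pod>, <ns> and the mount path are illustrative placeholders, not the generated names shown in the log.
  # Create a ConfigMap with a single key, then inspect it
  kubectl create configmap example-config --from-literal=data-1=value-1 -n <ns>
  kubectl get configmap example-config -n <ns> -o yaml
  # Once mounted into a pod through a configMap volume, each key appears as a file
  # (the mount path here is an example, not the one used by the e2e test)
  kubectl exec <pod> -n <ns> -- cat /etc/configmap-volume/data-1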
Jun 4 11:27:20.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:27:20.831: INFO: namespace: e2e-tests-configmap-qr2sb, resource: bindings, ignored listing per whitelist Jun 4 11:27:20.871: INFO: namespace e2e-tests-configmap-qr2sb deletion completed in 6.093666734s • [SLOW TEST:10.392 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:27:20.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-5ad8bdf4-a656-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume secrets Jun 4 11:27:21.053: INFO: Waiting up to 5m0s for pod "pod-secrets-5ade8d34-a656-11ea-86dc-0242ac110018" in namespace "e2e-tests-secrets-9kwd9" to be "success or failure" Jun 4 11:27:21.063: INFO: Pod "pod-secrets-5ade8d34-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.480675ms Jun 4 11:27:23.110: INFO: Pod "pod-secrets-5ade8d34-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057346784s Jun 4 11:27:25.115: INFO: Pod "pod-secrets-5ade8d34-a656-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06166189s STEP: Saw pod success Jun 4 11:27:25.115: INFO: Pod "pod-secrets-5ade8d34-a656-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:27:25.118: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-5ade8d34-a656-11ea-86dc-0242ac110018 container secret-env-test: STEP: delete the pod Jun 4 11:27:25.286: INFO: Waiting for pod pod-secrets-5ade8d34-a656-11ea-86dc-0242ac110018 to disappear Jun 4 11:27:25.367: INFO: Pod pod-secrets-5ade8d34-a656-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:27:25.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-9kwd9" for this suite. 
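The secret-to-environment-variable check above has a close command-line analogue. Again only a sketch: example-secret, secret-key, SECRET_ENV_NAME, <pod> and <ns> are made-up names, not the ones generated by this run.
  # Create a secret and read the base64-encoded value back, decoding it locally
  kubectl create secret generic example-secret --from-literal=secret-key=secret-value -n <ns>
  kubectl get secret example-secret -n <ns> -o jsonpath='{.data.secret-key}' | base64 --decode
  # A pod that references the secret via env[].valueFrom.secretKeyRef sees the decoded value
  kubectl exec <pod> -n <ns> -- printenv SECRET_ENV_NAME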
Jun 4 11:27:31.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:27:31.468: INFO: namespace: e2e-tests-secrets-9kwd9, resource: bindings, ignored listing per whitelist Jun 4 11:27:31.486: INFO: namespace e2e-tests-secrets-9kwd9 deletion completed in 6.115208665s • [SLOW TEST:10.615 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:27:31.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jun 4 11:27:41.668: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8rg4 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 11:27:41.668: INFO: >>> kubeConfig: /root/.kube/config I0604 11:27:41.706526 6 log.go:172] (0xc0026822c0) (0xc001d00fa0) Create stream I0604 11:27:41.706556 6 log.go:172] (0xc0026822c0) (0xc001d00fa0) Stream added, broadcasting: 1 I0604 11:27:41.711425 6 log.go:172] (0xc0026822c0) Reply frame received for 1 I0604 11:27:41.711558 6 log.go:172] (0xc0026822c0) (0xc002764140) Create stream I0604 11:27:41.711629 6 log.go:172] (0xc0026822c0) (0xc002764140) Stream added, broadcasting: 3 I0604 11:27:41.713796 6 log.go:172] (0xc0026822c0) Reply frame received for 3 I0604 11:27:41.713868 6 log.go:172] (0xc0026822c0) (0xc001e44460) Create stream I0604 11:27:41.713905 6 log.go:172] (0xc0026822c0) (0xc001e44460) Stream added, broadcasting: 5 I0604 11:27:41.715566 6 log.go:172] (0xc0026822c0) Reply frame received for 5 I0604 11:27:41.793971 6 log.go:172] (0xc0026822c0) Data frame received for 5 I0604 11:27:41.794011 6 log.go:172] (0xc001e44460) (5) Data frame handling I0604 11:27:41.794050 6 log.go:172] (0xc0026822c0) Data frame received for 3 I0604 11:27:41.794081 6 log.go:172] (0xc002764140) (3) Data frame handling I0604 11:27:41.794115 6 log.go:172] (0xc002764140) (3) Data frame sent I0604 11:27:41.794132 6 log.go:172] (0xc0026822c0) Data frame received for 3 I0604 11:27:41.794143 6 log.go:172] (0xc002764140) (3) Data frame handling I0604 11:27:41.795544 6 log.go:172] (0xc0026822c0) Data frame received for 1 I0604 11:27:41.795572 6 log.go:172] (0xc001d00fa0) (1) Data frame handling I0604 11:27:41.795601 6 log.go:172] (0xc001d00fa0) (1) Data frame sent I0604 
11:27:41.795623 6 log.go:172] (0xc0026822c0) (0xc001d00fa0) Stream removed, broadcasting: 1 I0604 11:27:41.795709 6 log.go:172] (0xc0026822c0) (0xc001d00fa0) Stream removed, broadcasting: 1 I0604 11:27:41.795728 6 log.go:172] (0xc0026822c0) (0xc002764140) Stream removed, broadcasting: 3 I0604 11:27:41.795751 6 log.go:172] (0xc0026822c0) Go away received I0604 11:27:41.795784 6 log.go:172] (0xc0026822c0) (0xc001e44460) Stream removed, broadcasting: 5 Jun 4 11:27:41.795: INFO: Exec stderr: "" Jun 4 11:27:41.795: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8rg4 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 11:27:41.795: INFO: >>> kubeConfig: /root/.kube/config I0604 11:27:41.831050 6 log.go:172] (0xc0007c94a0) (0xc001a361e0) Create stream I0604 11:27:41.831087 6 log.go:172] (0xc0007c94a0) (0xc001a361e0) Stream added, broadcasting: 1 I0604 11:27:41.833402 6 log.go:172] (0xc0007c94a0) Reply frame received for 1 I0604 11:27:41.833444 6 log.go:172] (0xc0007c94a0) (0xc0023ea0a0) Create stream I0604 11:27:41.833457 6 log.go:172] (0xc0007c94a0) (0xc0023ea0a0) Stream added, broadcasting: 3 I0604 11:27:41.834410 6 log.go:172] (0xc0007c94a0) Reply frame received for 3 I0604 11:27:41.834447 6 log.go:172] (0xc0007c94a0) (0xc001e98000) Create stream I0604 11:27:41.834460 6 log.go:172] (0xc0007c94a0) (0xc001e98000) Stream added, broadcasting: 5 I0604 11:27:41.835388 6 log.go:172] (0xc0007c94a0) Reply frame received for 5 I0604 11:27:41.891923 6 log.go:172] (0xc0007c94a0) Data frame received for 5 I0604 11:27:41.891956 6 log.go:172] (0xc001e98000) (5) Data frame handling I0604 11:27:41.891990 6 log.go:172] (0xc0007c94a0) Data frame received for 3 I0604 11:27:41.892014 6 log.go:172] (0xc0023ea0a0) (3) Data frame handling I0604 11:27:41.892028 6 log.go:172] (0xc0023ea0a0) (3) Data frame sent I0604 11:27:41.892042 6 log.go:172] (0xc0007c94a0) Data frame received for 3 I0604 11:27:41.892058 6 log.go:172] (0xc0023ea0a0) (3) Data frame handling I0604 11:27:41.894268 6 log.go:172] (0xc0007c94a0) Data frame received for 1 I0604 11:27:41.894304 6 log.go:172] (0xc001a361e0) (1) Data frame handling I0604 11:27:41.894330 6 log.go:172] (0xc001a361e0) (1) Data frame sent I0604 11:27:41.894345 6 log.go:172] (0xc0007c94a0) (0xc001a361e0) Stream removed, broadcasting: 1 I0604 11:27:41.894359 6 log.go:172] (0xc0007c94a0) Go away received I0604 11:27:41.894492 6 log.go:172] (0xc0007c94a0) (0xc001a361e0) Stream removed, broadcasting: 1 I0604 11:27:41.894509 6 log.go:172] (0xc0007c94a0) (0xc0023ea0a0) Stream removed, broadcasting: 3 I0604 11:27:41.894519 6 log.go:172] (0xc0007c94a0) (0xc001e98000) Stream removed, broadcasting: 5 Jun 4 11:27:41.894: INFO: Exec stderr: "" Jun 4 11:27:41.894: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8rg4 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 11:27:41.894: INFO: >>> kubeConfig: /root/.kube/config I0604 11:27:41.925108 6 log.go:172] (0xc0007c9d90) (0xc001a36460) Create stream I0604 11:27:41.925263 6 log.go:172] (0xc0007c9d90) (0xc001a36460) Stream added, broadcasting: 1 I0604 11:27:41.928515 6 log.go:172] (0xc0007c9d90) Reply frame received for 1 I0604 11:27:41.928553 6 log.go:172] (0xc0007c9d90) (0xc0023ea1e0) Create stream I0604 11:27:41.928570 6 log.go:172] (0xc0007c9d90) (0xc0023ea1e0) Stream added, broadcasting: 3 I0604 11:27:41.929823 6 
log.go:172] (0xc0007c9d90) Reply frame received for 3 I0604 11:27:41.929861 6 log.go:172] (0xc0007c9d90) (0xc0025460a0) Create stream I0604 11:27:41.929875 6 log.go:172] (0xc0007c9d90) (0xc0025460a0) Stream added, broadcasting: 5 I0604 11:27:41.930852 6 log.go:172] (0xc0007c9d90) Reply frame received for 5 I0604 11:27:41.996661 6 log.go:172] (0xc0007c9d90) Data frame received for 5 I0604 11:27:41.996705 6 log.go:172] (0xc0025460a0) (5) Data frame handling I0604 11:27:41.996736 6 log.go:172] (0xc0007c9d90) Data frame received for 3 I0604 11:27:41.996750 6 log.go:172] (0xc0023ea1e0) (3) Data frame handling I0604 11:27:41.996763 6 log.go:172] (0xc0023ea1e0) (3) Data frame sent I0604 11:27:41.996783 6 log.go:172] (0xc0007c9d90) Data frame received for 3 I0604 11:27:41.996800 6 log.go:172] (0xc0023ea1e0) (3) Data frame handling I0604 11:27:41.998598 6 log.go:172] (0xc0007c9d90) Data frame received for 1 I0604 11:27:41.998624 6 log.go:172] (0xc001a36460) (1) Data frame handling I0604 11:27:41.998641 6 log.go:172] (0xc001a36460) (1) Data frame sent I0604 11:27:41.998664 6 log.go:172] (0xc0007c9d90) (0xc001a36460) Stream removed, broadcasting: 1 I0604 11:27:41.998690 6 log.go:172] (0xc0007c9d90) Go away received I0604 11:27:41.998795 6 log.go:172] (0xc0007c9d90) (0xc001a36460) Stream removed, broadcasting: 1 I0604 11:27:41.998862 6 log.go:172] (0xc0007c9d90) (0xc0023ea1e0) Stream removed, broadcasting: 3 I0604 11:27:41.998935 6 log.go:172] (0xc0007c9d90) (0xc0025460a0) Stream removed, broadcasting: 5 Jun 4 11:27:41.998: INFO: Exec stderr: "" Jun 4 11:27:41.998: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8rg4 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 11:27:41.999: INFO: >>> kubeConfig: /root/.kube/config I0604 11:27:42.050934 6 log.go:172] (0xc002682370) (0xc0023ea6e0) Create stream I0604 11:27:42.050964 6 log.go:172] (0xc002682370) (0xc0023ea6e0) Stream added, broadcasting: 1 I0604 11:27:42.053623 6 log.go:172] (0xc002682370) Reply frame received for 1 I0604 11:27:42.053680 6 log.go:172] (0xc002682370) (0xc001e980a0) Create stream I0604 11:27:42.053718 6 log.go:172] (0xc002682370) (0xc001e980a0) Stream added, broadcasting: 3 I0604 11:27:42.054622 6 log.go:172] (0xc002682370) Reply frame received for 3 I0604 11:27:42.054659 6 log.go:172] (0xc002682370) (0xc001e981e0) Create stream I0604 11:27:42.054666 6 log.go:172] (0xc002682370) (0xc001e981e0) Stream added, broadcasting: 5 I0604 11:27:42.055577 6 log.go:172] (0xc002682370) Reply frame received for 5 I0604 11:27:42.115786 6 log.go:172] (0xc002682370) Data frame received for 5 I0604 11:27:42.115819 6 log.go:172] (0xc001e981e0) (5) Data frame handling I0604 11:27:42.115841 6 log.go:172] (0xc002682370) Data frame received for 3 I0604 11:27:42.115863 6 log.go:172] (0xc001e980a0) (3) Data frame handling I0604 11:27:42.115880 6 log.go:172] (0xc001e980a0) (3) Data frame sent I0604 11:27:42.115890 6 log.go:172] (0xc002682370) Data frame received for 3 I0604 11:27:42.115898 6 log.go:172] (0xc001e980a0) (3) Data frame handling I0604 11:27:42.116861 6 log.go:172] (0xc002682370) Data frame received for 1 I0604 11:27:42.116875 6 log.go:172] (0xc0023ea6e0) (1) Data frame handling I0604 11:27:42.116882 6 log.go:172] (0xc0023ea6e0) (1) Data frame sent I0604 11:27:42.116894 6 log.go:172] (0xc002682370) (0xc0023ea6e0) Stream removed, broadcasting: 1 I0604 11:27:42.116965 6 log.go:172] (0xc002682370) Go away received I0604 
11:27:42.117006 6 log.go:172] (0xc002682370) (0xc0023ea6e0) Stream removed, broadcasting: 1 I0604 11:27:42.117045 6 log.go:172] (0xc002682370) (0xc001e980a0) Stream removed, broadcasting: 3 I0604 11:27:42.117065 6 log.go:172] (0xc002682370) (0xc001e981e0) Stream removed, broadcasting: 5 Jun 4 11:27:42.117: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jun 4 11:27:42.117: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8rg4 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 11:27:42.117: INFO: >>> kubeConfig: /root/.kube/config I0604 11:27:42.144431 6 log.go:172] (0xc0006e3d90) (0xc001e985a0) Create stream I0604 11:27:42.144456 6 log.go:172] (0xc0006e3d90) (0xc001e985a0) Stream added, broadcasting: 1 I0604 11:27:42.146885 6 log.go:172] (0xc0006e3d90) Reply frame received for 1 I0604 11:27:42.146954 6 log.go:172] (0xc0006e3d90) (0xc001a36500) Create stream I0604 11:27:42.146982 6 log.go:172] (0xc0006e3d90) (0xc001a36500) Stream added, broadcasting: 3 I0604 11:27:42.148030 6 log.go:172] (0xc0006e3d90) Reply frame received for 3 I0604 11:27:42.148072 6 log.go:172] (0xc0006e3d90) (0xc001e98640) Create stream I0604 11:27:42.148082 6 log.go:172] (0xc0006e3d90) (0xc001e98640) Stream added, broadcasting: 5 I0604 11:27:42.149348 6 log.go:172] (0xc0006e3d90) Reply frame received for 5 I0604 11:27:42.206878 6 log.go:172] (0xc0006e3d90) Data frame received for 5 I0604 11:27:42.206928 6 log.go:172] (0xc001e98640) (5) Data frame handling I0604 11:27:42.206979 6 log.go:172] (0xc0006e3d90) Data frame received for 3 I0604 11:27:42.207007 6 log.go:172] (0xc001a36500) (3) Data frame handling I0604 11:27:42.207031 6 log.go:172] (0xc001a36500) (3) Data frame sent I0604 11:27:42.207045 6 log.go:172] (0xc0006e3d90) Data frame received for 3 I0604 11:27:42.207055 6 log.go:172] (0xc001a36500) (3) Data frame handling I0604 11:27:42.208374 6 log.go:172] (0xc0006e3d90) Data frame received for 1 I0604 11:27:42.208405 6 log.go:172] (0xc001e985a0) (1) Data frame handling I0604 11:27:42.208428 6 log.go:172] (0xc001e985a0) (1) Data frame sent I0604 11:27:42.208446 6 log.go:172] (0xc0006e3d90) (0xc001e985a0) Stream removed, broadcasting: 1 I0604 11:27:42.208537 6 log.go:172] (0xc0006e3d90) (0xc001e985a0) Stream removed, broadcasting: 1 I0604 11:27:42.208553 6 log.go:172] (0xc0006e3d90) (0xc001a36500) Stream removed, broadcasting: 3 I0604 11:27:42.208686 6 log.go:172] (0xc0006e3d90) Go away received I0604 11:27:42.208758 6 log.go:172] (0xc0006e3d90) (0xc001e98640) Stream removed, broadcasting: 5 Jun 4 11:27:42.208: INFO: Exec stderr: "" Jun 4 11:27:42.208: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8rg4 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 11:27:42.209: INFO: >>> kubeConfig: /root/.kube/config I0604 11:27:42.234960 6 log.go:172] (0xc000f422c0) (0xc002546320) Create stream I0604 11:27:42.234998 6 log.go:172] (0xc000f422c0) (0xc002546320) Stream added, broadcasting: 1 I0604 11:27:42.238162 6 log.go:172] (0xc000f422c0) Reply frame received for 1 I0604 11:27:42.238199 6 log.go:172] (0xc000f422c0) (0xc0025463c0) Create stream I0604 11:27:42.238211 6 log.go:172] (0xc000f422c0) (0xc0025463c0) Stream added, broadcasting: 3 I0604 11:27:42.239037 6 log.go:172] (0xc000f422c0) Reply frame received for 3 I0604 
11:27:42.239064 6 log.go:172] (0xc000f422c0) (0xc002546460) Create stream I0604 11:27:42.239074 6 log.go:172] (0xc000f422c0) (0xc002546460) Stream added, broadcasting: 5 I0604 11:27:42.239950 6 log.go:172] (0xc000f422c0) Reply frame received for 5 I0604 11:27:42.306583 6 log.go:172] (0xc000f422c0) Data frame received for 5 I0604 11:27:42.306628 6 log.go:172] (0xc002546460) (5) Data frame handling I0604 11:27:42.306651 6 log.go:172] (0xc000f422c0) Data frame received for 3 I0604 11:27:42.306660 6 log.go:172] (0xc0025463c0) (3) Data frame handling I0604 11:27:42.306669 6 log.go:172] (0xc0025463c0) (3) Data frame sent I0604 11:27:42.306677 6 log.go:172] (0xc000f422c0) Data frame received for 3 I0604 11:27:42.306682 6 log.go:172] (0xc0025463c0) (3) Data frame handling I0604 11:27:42.308282 6 log.go:172] (0xc000f422c0) Data frame received for 1 I0604 11:27:42.308338 6 log.go:172] (0xc002546320) (1) Data frame handling I0604 11:27:42.308372 6 log.go:172] (0xc002546320) (1) Data frame sent I0604 11:27:42.308413 6 log.go:172] (0xc000f422c0) (0xc002546320) Stream removed, broadcasting: 1 I0604 11:27:42.308439 6 log.go:172] (0xc000f422c0) Go away received I0604 11:27:42.308748 6 log.go:172] (0xc000f422c0) (0xc002546320) Stream removed, broadcasting: 1 I0604 11:27:42.308769 6 log.go:172] (0xc000f422c0) (0xc0025463c0) Stream removed, broadcasting: 3 I0604 11:27:42.308783 6 log.go:172] (0xc000f422c0) (0xc002546460) Stream removed, broadcasting: 5 Jun 4 11:27:42.308: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 4 11:27:42.308: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8rg4 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 11:27:42.308: INFO: >>> kubeConfig: /root/.kube/config I0604 11:27:42.344418 6 log.go:172] (0xc000f42790) (0xc002546780) Create stream I0604 11:27:42.344448 6 log.go:172] (0xc000f42790) (0xc002546780) Stream added, broadcasting: 1 I0604 11:27:42.346287 6 log.go:172] (0xc000f42790) Reply frame received for 1 I0604 11:27:42.346325 6 log.go:172] (0xc000f42790) (0xc002546820) Create stream I0604 11:27:42.346337 6 log.go:172] (0xc000f42790) (0xc002546820) Stream added, broadcasting: 3 I0604 11:27:42.347203 6 log.go:172] (0xc000f42790) Reply frame received for 3 I0604 11:27:42.347242 6 log.go:172] (0xc000f42790) (0xc0025468c0) Create stream I0604 11:27:42.347257 6 log.go:172] (0xc000f42790) (0xc0025468c0) Stream added, broadcasting: 5 I0604 11:27:42.348431 6 log.go:172] (0xc000f42790) Reply frame received for 5 I0604 11:27:42.408385 6 log.go:172] (0xc000f42790) Data frame received for 5 I0604 11:27:42.408424 6 log.go:172] (0xc0025468c0) (5) Data frame handling I0604 11:27:42.408459 6 log.go:172] (0xc000f42790) Data frame received for 3 I0604 11:27:42.408486 6 log.go:172] (0xc002546820) (3) Data frame handling I0604 11:27:42.408511 6 log.go:172] (0xc002546820) (3) Data frame sent I0604 11:27:42.408528 6 log.go:172] (0xc000f42790) Data frame received for 3 I0604 11:27:42.408540 6 log.go:172] (0xc002546820) (3) Data frame handling I0604 11:27:42.409723 6 log.go:172] (0xc000f42790) Data frame received for 1 I0604 11:27:42.409764 6 log.go:172] (0xc002546780) (1) Data frame handling I0604 11:27:42.409784 6 log.go:172] (0xc002546780) (1) Data frame sent I0604 11:27:42.409800 6 log.go:172] (0xc000f42790) (0xc002546780) Stream removed, broadcasting: 1 I0604 11:27:42.409814 6 log.go:172] 
(0xc000f42790) Go away received I0604 11:27:42.409956 6 log.go:172] (0xc000f42790) (0xc002546780) Stream removed, broadcasting: 1 I0604 11:27:42.409977 6 log.go:172] (0xc000f42790) (0xc002546820) Stream removed, broadcasting: 3 I0604 11:27:42.409993 6 log.go:172] (0xc000f42790) (0xc0025468c0) Stream removed, broadcasting: 5 Jun 4 11:27:42.410: INFO: Exec stderr: "" Jun 4 11:27:42.410: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8rg4 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 11:27:42.410: INFO: >>> kubeConfig: /root/.kube/config I0604 11:27:42.433648 6 log.go:172] (0xc000b3c4d0) (0xc001a36780) Create stream I0604 11:27:42.433691 6 log.go:172] (0xc000b3c4d0) (0xc001a36780) Stream added, broadcasting: 1 I0604 11:27:42.435362 6 log.go:172] (0xc000b3c4d0) Reply frame received for 1 I0604 11:27:42.435392 6 log.go:172] (0xc000b3c4d0) (0xc001b4a000) Create stream I0604 11:27:42.435402 6 log.go:172] (0xc000b3c4d0) (0xc001b4a000) Stream added, broadcasting: 3 I0604 11:27:42.436190 6 log.go:172] (0xc000b3c4d0) Reply frame received for 3 I0604 11:27:42.436224 6 log.go:172] (0xc000b3c4d0) (0xc001a36820) Create stream I0604 11:27:42.436236 6 log.go:172] (0xc000b3c4d0) (0xc001a36820) Stream added, broadcasting: 5 I0604 11:27:42.437013 6 log.go:172] (0xc000b3c4d0) Reply frame received for 5 I0604 11:27:42.494738 6 log.go:172] (0xc000b3c4d0) Data frame received for 5 I0604 11:27:42.494768 6 log.go:172] (0xc001a36820) (5) Data frame handling I0604 11:27:42.494823 6 log.go:172] (0xc000b3c4d0) Data frame received for 3 I0604 11:27:42.494872 6 log.go:172] (0xc001b4a000) (3) Data frame handling I0604 11:27:42.494895 6 log.go:172] (0xc001b4a000) (3) Data frame sent I0604 11:27:42.494912 6 log.go:172] (0xc000b3c4d0) Data frame received for 3 I0604 11:27:42.494928 6 log.go:172] (0xc001b4a000) (3) Data frame handling I0604 11:27:42.496614 6 log.go:172] (0xc000b3c4d0) Data frame received for 1 I0604 11:27:42.496643 6 log.go:172] (0xc001a36780) (1) Data frame handling I0604 11:27:42.496676 6 log.go:172] (0xc001a36780) (1) Data frame sent I0604 11:27:42.496700 6 log.go:172] (0xc000b3c4d0) (0xc001a36780) Stream removed, broadcasting: 1 I0604 11:27:42.496724 6 log.go:172] (0xc000b3c4d0) Go away received I0604 11:27:42.496868 6 log.go:172] (0xc000b3c4d0) (0xc001a36780) Stream removed, broadcasting: 1 I0604 11:27:42.496905 6 log.go:172] (0xc000b3c4d0) (0xc001b4a000) Stream removed, broadcasting: 3 I0604 11:27:42.496923 6 log.go:172] (0xc000b3c4d0) (0xc001a36820) Stream removed, broadcasting: 5 Jun 4 11:27:42.496: INFO: Exec stderr: "" Jun 4 11:27:42.496: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8rg4 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 11:27:42.497: INFO: >>> kubeConfig: /root/.kube/config I0604 11:27:42.528992 6 log.go:172] (0xc000f42c60) (0xc002546b40) Create stream I0604 11:27:42.529025 6 log.go:172] (0xc000f42c60) (0xc002546b40) Stream added, broadcasting: 1 I0604 11:27:42.531776 6 log.go:172] (0xc000f42c60) Reply frame received for 1 I0604 11:27:42.531819 6 log.go:172] (0xc000f42c60) (0xc001b4a140) Create stream I0604 11:27:42.531832 6 log.go:172] (0xc000f42c60) (0xc001b4a140) Stream added, broadcasting: 3 I0604 11:27:42.532945 6 log.go:172] (0xc000f42c60) Reply frame received for 3 I0604 11:27:42.532989 6 log.go:172] (0xc000f42c60) 
(0xc001b4a1e0) Create stream I0604 11:27:42.533004 6 log.go:172] (0xc000f42c60) (0xc001b4a1e0) Stream added, broadcasting: 5 I0604 11:27:42.534648 6 log.go:172] (0xc000f42c60) Reply frame received for 5 I0604 11:27:42.599614 6 log.go:172] (0xc000f42c60) Data frame received for 3 I0604 11:27:42.599649 6 log.go:172] (0xc001b4a140) (3) Data frame handling I0604 11:27:42.599657 6 log.go:172] (0xc001b4a140) (3) Data frame sent I0604 11:27:42.599664 6 log.go:172] (0xc000f42c60) Data frame received for 3 I0604 11:27:42.599677 6 log.go:172] (0xc001b4a140) (3) Data frame handling I0604 11:27:42.599691 6 log.go:172] (0xc000f42c60) Data frame received for 5 I0604 11:27:42.599699 6 log.go:172] (0xc001b4a1e0) (5) Data frame handling I0604 11:27:42.601090 6 log.go:172] (0xc000f42c60) Data frame received for 1 I0604 11:27:42.601301 6 log.go:172] (0xc002546b40) (1) Data frame handling I0604 11:27:42.601331 6 log.go:172] (0xc002546b40) (1) Data frame sent I0604 11:27:42.601348 6 log.go:172] (0xc000f42c60) (0xc002546b40) Stream removed, broadcasting: 1 I0604 11:27:42.601376 6 log.go:172] (0xc000f42c60) Go away received I0604 11:27:42.601559 6 log.go:172] (0xc000f42c60) (0xc002546b40) Stream removed, broadcasting: 1 I0604 11:27:42.601598 6 log.go:172] (0xc000f42c60) (0xc001b4a140) Stream removed, broadcasting: 3 I0604 11:27:42.601614 6 log.go:172] (0xc000f42c60) (0xc001b4a1e0) Stream removed, broadcasting: 5 Jun 4 11:27:42.601: INFO: Exec stderr: "" Jun 4 11:27:42.601: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-s8rg4 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 11:27:42.601: INFO: >>> kubeConfig: /root/.kube/config I0604 11:27:42.634013 6 log.go:172] (0xc000f43130) (0xc002546dc0) Create stream I0604 11:27:42.634054 6 log.go:172] (0xc000f43130) (0xc002546dc0) Stream added, broadcasting: 1 I0604 11:27:42.635873 6 log.go:172] (0xc000f43130) Reply frame received for 1 I0604 11:27:42.635898 6 log.go:172] (0xc000f43130) (0xc0023ea780) Create stream I0604 11:27:42.635907 6 log.go:172] (0xc000f43130) (0xc0023ea780) Stream added, broadcasting: 3 I0604 11:27:42.636984 6 log.go:172] (0xc000f43130) Reply frame received for 3 I0604 11:27:42.637015 6 log.go:172] (0xc000f43130) (0xc002546e60) Create stream I0604 11:27:42.637025 6 log.go:172] (0xc000f43130) (0xc002546e60) Stream added, broadcasting: 5 I0604 11:27:42.638536 6 log.go:172] (0xc000f43130) Reply frame received for 5 I0604 11:27:42.703268 6 log.go:172] (0xc000f43130) Data frame received for 5 I0604 11:27:42.703315 6 log.go:172] (0xc002546e60) (5) Data frame handling I0604 11:27:42.703342 6 log.go:172] (0xc000f43130) Data frame received for 3 I0604 11:27:42.703363 6 log.go:172] (0xc0023ea780) (3) Data frame handling I0604 11:27:42.703383 6 log.go:172] (0xc0023ea780) (3) Data frame sent I0604 11:27:42.703409 6 log.go:172] (0xc000f43130) Data frame received for 3 I0604 11:27:42.703421 6 log.go:172] (0xc0023ea780) (3) Data frame handling I0604 11:27:42.704630 6 log.go:172] (0xc000f43130) Data frame received for 1 I0604 11:27:42.704645 6 log.go:172] (0xc002546dc0) (1) Data frame handling I0604 11:27:42.704653 6 log.go:172] (0xc002546dc0) (1) Data frame sent I0604 11:27:42.704664 6 log.go:172] (0xc000f43130) (0xc002546dc0) Stream removed, broadcasting: 1 I0604 11:27:42.704759 6 log.go:172] (0xc000f43130) Go away received I0604 11:27:42.704825 6 log.go:172] (0xc000f43130) (0xc002546dc0) Stream removed, broadcasting: 1 I0604 
11:27:42.704865 6 log.go:172] (0xc000f43130) (0xc0023ea780) Stream removed, broadcasting: 3 I0604 11:27:42.704891 6 log.go:172] (0xc000f43130) (0xc002546e60) Stream removed, broadcasting: 5 Jun 4 11:27:42.704: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:27:42.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-s8rg4" for this suite. Jun 4 11:28:32.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:28:32.770: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-s8rg4, resource: bindings, ignored listing per whitelist Jun 4 11:28:32.803: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-s8rg4 deletion completed in 50.093300135s • [SLOW TEST:61.316 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:28:32.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-85b6fb8e-a656-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume secrets Jun 4 11:28:32.919: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-85b8ddc1-a656-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-j2876" to be "success or failure" Jun 4 11:28:32.961: INFO: Pod "pod-projected-secrets-85b8ddc1-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 42.774504ms Jun 4 11:28:34.966: INFO: Pod "pod-projected-secrets-85b8ddc1-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047546592s Jun 4 11:28:36.970: INFO: Pod "pod-projected-secrets-85b8ddc1-a656-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.050975741s STEP: Saw pod success Jun 4 11:28:36.970: INFO: Pod "pod-projected-secrets-85b8ddc1-a656-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:28:36.972: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-85b8ddc1-a656-11ea-86dc-0242ac110018 container secret-volume-test: STEP: delete the pod Jun 4 11:28:37.001: INFO: Waiting for pod pod-projected-secrets-85b8ddc1-a656-11ea-86dc-0242ac110018 to disappear Jun 4 11:28:37.012: INFO: Pod pod-projected-secrets-85b8ddc1-a656-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:28:37.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-j2876" for this suite. Jun 4 11:28:43.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:28:43.053: INFO: namespace: e2e-tests-projected-j2876, resource: bindings, ignored listing per whitelist Jun 4 11:28:43.103: INFO: namespace e2e-tests-projected-j2876 deletion completed in 6.088104995s • [SLOW TEST:10.300 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:28:43.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jun 4 11:28:51.031: INFO: 8 pods remaining Jun 4 11:28:51.031: INFO: 0 pods has nil DeletionTimestamp Jun 4 11:28:51.031: INFO: Jun 4 11:28:51.734: INFO: 0 pods remaining Jun 4 11:28:51.734: INFO: 0 pods has nil DeletionTimestamp Jun 4 11:28:51.734: INFO: Jun 4 11:28:52.354: INFO: 0 pods remaining Jun 4 11:28:52.354: INFO: 0 pods has nil DeletionTimestamp Jun 4 11:28:52.354: INFO: STEP: Gathering metrics W0604 11:28:53.106184 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
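For illustration only (not part of the recorded run): the garbage-collector case above deletes the ReplicationController and then watches its pods drain while the RC object itself lingers, which is the behaviour of a foreground cascading delete. The sketch below shows roughly how such a delete is issued with client-go; the namespace and RC name are placeholders rather than the generated values in the log, and the call signature matches the client-go generation contemporary with this v1.13 cluster (newer releases take a context and value options).

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// deleteRCForeground issues a cascading delete with foreground propagation:
// the ReplicationController gets a deletionTimestamp and the
// foregroundDeletion finalizer, and is only removed once its pods are gone,
// which matches the "pods remaining" countdown recorded above.
func deleteRCForeground(client kubernetes.Interface, namespace, name string) error {
	policy := metav1.DeletePropagationForeground
	return client.CoreV1().ReplicationControllers(namespace).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}

func main() {
	// Kubeconfig path as used throughout this run; namespace and RC name are
	// placeholders, not the generated values from the log.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	if err := deleteRCForeground(client, "default", "example-rc"); err != nil {
		fmt.Println("delete failed:", err)
	}
}

With orphan propagation instead of foreground, the RC object would be removed immediately and its pods left behind; the countdown of pods with a non-nil DeletionTimestamp is what distinguishes the foreground case.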
Jun 4 11:28:53.106: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:28:53.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-fnkjg" for this suite. Jun 4 11:28:59.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:28:59.520: INFO: namespace: e2e-tests-gc-fnkjg, resource: bindings, ignored listing per whitelist Jun 4 11:28:59.582: INFO: namespace e2e-tests-gc-fnkjg deletion completed in 6.157045209s • [SLOW TEST:16.479 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:28:59.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 4 11:28:59.694: INFO: Waiting up to 5m0s for pod "pod-95aee2a0-a656-11ea-86dc-0242ac110018" in namespace "e2e-tests-emptydir-tmj4r" to be "success or failure" Jun 4 11:28:59.705: INFO: Pod "pod-95aee2a0-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 11.12284ms Jun 4 11:29:01.710: INFO: Pod "pod-95aee2a0-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015568775s Jun 4 11:29:03.714: INFO: Pod "pod-95aee2a0-a656-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019400338s STEP: Saw pod success Jun 4 11:29:03.714: INFO: Pod "pod-95aee2a0-a656-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:29:03.716: INFO: Trying to get logs from node hunter-worker2 pod pod-95aee2a0-a656-11ea-86dc-0242ac110018 container test-container: STEP: delete the pod Jun 4 11:29:03.802: INFO: Waiting for pod pod-95aee2a0-a656-11ea-86dc-0242ac110018 to disappear Jun 4 11:29:03.812: INFO: Pod pod-95aee2a0-a656-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:29:03.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-tmj4r" for this suite. Jun 4 11:29:09.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:29:09.912: INFO: namespace: e2e-tests-emptydir-tmj4r, resource: bindings, ignored listing per whitelist Jun 4 11:29:09.916: INFO: namespace e2e-tests-emptydir-tmj4r deletion completed in 6.100975816s • [SLOW TEST:10.334 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:29:09.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 4 11:29:10.025: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jun 4 11:29:15.030: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 4 11:29:15.030: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jun 4 11:29:17.035: INFO: Creating deployment "test-rollover-deployment" Jun 4 11:29:17.043: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jun 4 11:29:19.049: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jun 4 11:29:19.055: INFO: Ensure that both replica sets have 1 created replica Jun 4 11:29:19.061: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jun 4 11:29:19.067: INFO: Updating deployment test-rollover-deployment Jun 4 11:29:19.067: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jun 4 11:29:21.077: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jun 4 11:29:21.084: INFO: Make sure deployment "test-rollover-deployment" is complete Jun 4 11:29:21.090: INFO: all replica sets need to contain the pod-template-hash label Jun 4 11:29:21.090: 
INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866957, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866957, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866959, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866957, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 4 11:29:23.099: INFO: all replica sets need to contain the pod-template-hash label Jun 4 11:29:23.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866957, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866957, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866962, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866957, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 4 11:29:25.099: INFO: all replica sets need to contain the pod-template-hash label Jun 4 11:29:25.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866957, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866957, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866962, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866957, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 4 11:29:27.099: INFO: all replica sets need to contain the pod-template-hash label Jun 4 11:29:27.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866957, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63726866957, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866962, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866957, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 4 11:29:29.140: INFO: all replica sets need to contain the pod-template-hash label Jun 4 11:29:29.140: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866957, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866957, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866962, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866957, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 4 11:29:31.100: INFO: all replica sets need to contain the pod-template-hash label Jun 4 11:29:31.100: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866957, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866957, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866962, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726866957, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 4 11:29:33.098: INFO: Jun 4 11:29:33.098: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 4 11:29:33.106: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-7k7l7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7k7l7/deployments/test-rollover-deployment,UID:a0060ab0-a656-11ea-99e8-0242ac110002,ResourceVersion:14171424,Generation:2,CreationTimestamp:2020-06-04 11:29:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-04 11:29:17 +0000 UTC 2020-06-04 11:29:17 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-04 11:29:32 +0000 UTC 2020-06-04 11:29:17 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 4 11:29:33.109: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-7k7l7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7k7l7/replicasets/test-rollover-deployment-5b8479fdb6,UID:a13c2cf4-a656-11ea-99e8-0242ac110002,ResourceVersion:14171415,Generation:2,CreationTimestamp:2020-06-04 11:29:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment a0060ab0-a656-11ea-99e8-0242ac110002 0xc001bb8817 0xc001bb8818}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 4 11:29:33.109: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 4 11:29:33.109: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-7k7l7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7k7l7/replicasets/test-rollover-controller,UID:9bd41786-a656-11ea-99e8-0242ac110002,ResourceVersion:14171423,Generation:2,CreationTimestamp:2020-06-04 11:29:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment a0060ab0-a656-11ea-99e8-0242ac110002 0xc001bb850f 0xc001bb8600}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 4 11:29:33.109: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-7k7l7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7k7l7/replicasets/test-rollover-deployment-58494b7559,UID:a00841c5-a656-11ea-99e8-0242ac110002,ResourceVersion:14171381,Generation:2,CreationTimestamp:2020-06-04 11:29:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment a0060ab0-a656-11ea-99e8-0242ac110002 0xc001bb86c7 0xc001bb86c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 4 11:29:33.112: INFO: Pod "test-rollover-deployment-5b8479fdb6-8fj29" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-8fj29,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-7k7l7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7k7l7/pods/test-rollover-deployment-5b8479fdb6-8fj29,UID:a14adf00-a656-11ea-99e8-0242ac110002,ResourceVersion:14171393,Generation:0,CreationTimestamp:2020-06-04 11:29:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 a13c2cf4-a656-11ea-99e8-0242ac110002 0xc002053d87 0xc002053d88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-f6xsg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f6xsg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-f6xsg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002053e00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002053e20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:29:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:29:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:29:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-06-04 11:29:19 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.53,StartTime:2020-06-04 11:29:19 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-04 11:29:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://311c9ec5fdbf761959b263392145d77788a7aa3fe2508534106a2f13acd1092b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:29:33.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-7k7l7" for this suite. Jun 4 11:29:41.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:29:41.236: INFO: namespace: e2e-tests-deployment-7k7l7, resource: bindings, ignored listing per whitelist Jun 4 11:29:41.242: INFO: namespace e2e-tests-deployment-7k7l7 deletion completed in 8.126244337s • [SLOW TEST:31.326 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:29:41.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 4 11:29:41.369: INFO: Waiting up to 5m0s for pod "downward-api-ae847b85-a656-11ea-86dc-0242ac110018" in namespace "e2e-tests-downward-api-gvt9v" to be "success or failure" Jun 4 11:29:41.373: INFO: Pod "downward-api-ae847b85-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.737993ms Jun 4 11:29:43.376: INFO: Pod "downward-api-ae847b85-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006704726s Jun 4 11:29:45.380: INFO: Pod "downward-api-ae847b85-a656-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010669543s STEP: Saw pod success Jun 4 11:29:45.380: INFO: Pod "downward-api-ae847b85-a656-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:29:45.383: INFO: Trying to get logs from node hunter-worker2 pod downward-api-ae847b85-a656-11ea-86dc-0242ac110018 container dapi-container: STEP: delete the pod Jun 4 11:29:45.426: INFO: Waiting for pod downward-api-ae847b85-a656-11ea-86dc-0242ac110018 to disappear Jun 4 11:29:45.555: INFO: Pod downward-api-ae847b85-a656-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:29:45.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-gvt9v" for this suite. Jun 4 11:29:51.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:29:51.667: INFO: namespace: e2e-tests-downward-api-gvt9v, resource: bindings, ignored listing per whitelist Jun 4 11:29:51.677: INFO: namespace e2e-tests-downward-api-gvt9v deletion completed in 6.117779384s • [SLOW TEST:10.435 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:29:51.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-b4baca57-a656-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 4 11:29:51.797: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b4bcf286-a656-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-c8q42" to be "success or failure" Jun 4 11:29:51.802: INFO: Pod "pod-projected-configmaps-b4bcf286-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.953429ms Jun 4 11:29:53.807: INFO: Pod "pod-projected-configmaps-b4bcf286-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009845296s Jun 4 11:29:55.812: INFO: Pod "pod-projected-configmaps-b4bcf286-a656-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014508555s STEP: Saw pod success Jun 4 11:29:55.812: INFO: Pod "pod-projected-configmaps-b4bcf286-a656-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:29:55.815: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-b4bcf286-a656-11ea-86dc-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jun 4 11:29:55.868: INFO: Waiting for pod pod-projected-configmaps-b4bcf286-a656-11ea-86dc-0242ac110018 to disappear Jun 4 11:29:55.873: INFO: Pod pod-projected-configmaps-b4bcf286-a656-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:29:55.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-c8q42" for this suite. Jun 4 11:30:01.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:30:01.930: INFO: namespace: e2e-tests-projected-c8q42, resource: bindings, ignored listing per whitelist Jun 4 11:30:01.995: INFO: namespace e2e-tests-projected-c8q42 deletion completed in 6.117880955s • [SLOW TEST:10.318 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:30:01.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:30:06.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-rbt98" for this suite. 
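For illustration only: the Kubelet case that just finished schedules a busybox container whose command always fails and then checks that the container status carries a terminated reason. A minimal sketch of that kind of pod follows, assuming busybox and /bin/false (the log does not show the exact spec); RestartPolicy Never is a choice made for the sketch so the terminated state stays visible.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// alwaysFailingPod builds a pod whose only container runs a command that
// exits non-zero immediately, so its container status ends up with a
// Terminated state carrying a non-empty Reason.
func alwaysFailingPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-pod"},
		Spec: corev1.PodSpec{
			// Never restart, so the terminated state stays visible instead of
			// cycling through restarts; this is a choice for the sketch.
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"},
			}},
		},
	}
}

func main() {
	pod := alwaysFailingPod()
	fmt.Printf("%s runs %v and should report a terminated reason\n",
		pod.Name, pod.Spec.Containers[0].Command)
}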
Jun 4 11:30:12.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:30:12.199: INFO: namespace: e2e-tests-kubelet-test-rbt98, resource: bindings, ignored listing per whitelist Jun 4 11:30:12.286: INFO: namespace e2e-tests-kubelet-test-rbt98 deletion completed in 6.128943886s • [SLOW TEST:10.290 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:30:12.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 4 11:30:12.445: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"c107f2ba-a656-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00256d8e2), BlockOwnerDeletion:(*bool)(0xc00256d8e3)}} Jun 4 11:30:12.465: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c106b118-a656-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00243c232), BlockOwnerDeletion:(*bool)(0xc00243c233)}} Jun 4 11:30:12.478: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c1073b8c-a656-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0023b859a), BlockOwnerDeletion:(*bool)(0xc0023b859b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:30:17.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-94x8m" for this suite. 
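For illustration only: the owner references dumped above form a circle (pod1 owned by pod3, pod2 owned by pod1, pod3 owned by pod2), and the garbage collector is expected to clean the pods up rather than deadlock on the cycle. Below is a small sketch of building one such OwnerReference; the UID is a placeholder, and the Controller/BlockOwnerDeletion values are illustrative since the log only shows them as pointers.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ownedByPod builds an owner reference pointing at another Pod, the shape of
// the references dumped in the log. Controller and BlockOwnerDeletion are
// *bool there as well; the true values chosen here are illustrative.
func ownedByPod(ownerName string, ownerUID types.UID) metav1.OwnerReference {
	controller := true
	block := true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               ownerName,
		UID:                ownerUID,
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}
}

func main() {
	// One edge of the circle: pod1 declares pod3 as its owner. The UID is a
	// placeholder; in a real cluster it must match pod3's metadata.uid.
	ref := ownedByPod("pod3", types.UID("00000000-0000-0000-0000-000000000000"))
	fmt.Printf("pod1.ownerReferences = [{%s %s %s}]\n", ref.APIVersion, ref.Kind, ref.Name)
}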
Jun 4 11:30:23.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:30:23.560: INFO: namespace: e2e-tests-gc-94x8m, resource: bindings, ignored listing per whitelist Jun 4 11:30:23.632: INFO: namespace e2e-tests-gc-94x8m deletion completed in 6.105629632s • [SLOW TEST:11.346 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:30:23.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Jun 4 11:30:23.736: INFO: Waiting up to 5m0s for pod "client-containers-c7c5c706-a656-11ea-86dc-0242ac110018" in namespace "e2e-tests-containers-4vwpx" to be "success or failure" Jun 4 11:30:23.739: INFO: Pod "client-containers-c7c5c706-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.64636ms Jun 4 11:30:25.744: INFO: Pod "client-containers-c7c5c706-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007894718s Jun 4 11:30:27.748: INFO: Pod "client-containers-c7c5c706-a656-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011855193s STEP: Saw pod success Jun 4 11:30:27.748: INFO: Pod "client-containers-c7c5c706-a656-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:30:27.750: INFO: Trying to get logs from node hunter-worker2 pod client-containers-c7c5c706-a656-11ea-86dc-0242ac110018 container test-container: STEP: delete the pod Jun 4 11:30:27.806: INFO: Waiting for pod client-containers-c7c5c706-a656-11ea-86dc-0242ac110018 to disappear Jun 4 11:30:27.818: INFO: Pod client-containers-c7c5c706-a656-11ea-86dc-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:30:27.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-4vwpx" for this suite. 
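For illustration only: the Docker Containers case above leaves Command and Args empty so the container runs whatever ENTRYPOINT and CMD the image defines. A minimal sketch of such a pod follows; the container name matches the one in the log, while the image is an assumption for the sketch.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// defaultsPod builds a pod whose container sets neither Command nor Args, so
// the image's own ENTRYPOINT and CMD decide what runs.
func defaultsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-defaults"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				// Container name as in the log; the image is an assumption,
				// any image with a usable ENTRYPOINT/CMD behaves the same way.
				Name:  "test-container",
				Image: "busybox",
			}},
		},
	}
}

func main() {
	p := defaultsPod()
	fmt.Printf("container %q: command=%v args=%v (image defaults apply)\n",
		p.Spec.Containers[0].Name, p.Spec.Containers[0].Command, p.Spec.Containers[0].Args)
}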
Jun 4 11:30:33.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:30:33.896: INFO: namespace: e2e-tests-containers-4vwpx, resource: bindings, ignored listing per whitelist Jun 4 11:30:33.908: INFO: namespace e2e-tests-containers-4vwpx deletion completed in 6.086843869s • [SLOW TEST:10.276 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:30:33.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jun 4 11:30:34.126: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:30:34.129: INFO: Number of nodes with available pods: 0 Jun 4 11:30:34.129: INFO: Node hunter-worker is running more than one daemon pod Jun 4 11:30:35.134: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:30:35.137: INFO: Number of nodes with available pods: 0 Jun 4 11:30:35.137: INFO: Node hunter-worker is running more than one daemon pod Jun 4 11:30:36.134: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:30:36.138: INFO: Number of nodes with available pods: 0 Jun 4 11:30:36.138: INFO: Node hunter-worker is running more than one daemon pod Jun 4 11:30:37.135: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:30:37.138: INFO: Number of nodes with available pods: 0 Jun 4 11:30:37.138: INFO: Node hunter-worker is running more than one daemon pod Jun 4 11:30:38.134: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:30:38.137: INFO: Number of nodes with available pods: 1 Jun 4 11:30:38.137: INFO: Node hunter-worker2 is running more than one daemon pod Jun 4 11:30:39.134: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:30:39.138: INFO: Number of nodes with available pods: 2 Jun 4 11:30:39.138: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jun 4 11:30:39.156: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:30:39.162: INFO: Number of nodes with available pods: 2 Jun 4 11:30:39.162: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-t94s5, will wait for the garbage collector to delete the pods Jun 4 11:30:40.246: INFO: Deleting DaemonSet.extensions daemon-set took: 6.510028ms Jun 4 11:30:40.346: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.190484ms Jun 4 11:30:51.750: INFO: Number of nodes with available pods: 0 Jun 4 11:30:51.750: INFO: Number of running nodes: 0, number of available pods: 0 Jun 4 11:30:51.752: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-t94s5/daemonsets","resourceVersion":"14171815"},"items":null} Jun 4 11:30:51.755: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-t94s5/pods","resourceVersion":"14171815"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:30:51.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-t94s5" for this suite. 
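Two things in the DaemonSet spec above are worth unpacking: the repeated "can't tolerate node hunter-control-plane" lines appear because the pod template carries no toleration for the master's NoSchedule taint, so only the two workers are counted; and the core assertion is that after a daemon pod is forced to phase Failed, the controller recreates it on the same node. A minimal DaemonSet of the shape this spec creates is sketched below; the labels and image are illustrative, only the name "daemon-set" is taken from the log.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed label key/value
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			// The controller keeps exactly one pod per eligible node and
			// recreates any pod that fails or disappears.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine", // placeholder image
					}},
				},
			},
		},
	}
	fmt.Println("daemonset template labels:", ds.Spec.Template.Labels)
}

Adding a toleration for node-role.kubernetes.io/master to the template would let the pods land on the control-plane node as well, which is exactly what the skipped-node log lines are pointing out.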
Jun 4 11:30:57.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:30:57.879: INFO: namespace: e2e-tests-daemonsets-t94s5, resource: bindings, ignored listing per whitelist Jun 4 11:30:57.883: INFO: namespace e2e-tests-daemonsets-t94s5 deletion completed in 6.114774907s • [SLOW TEST:23.975 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:30:57.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 4 11:30:58.001: INFO: Waiting up to 5m0s for pod "pod-dc330d3b-a656-11ea-86dc-0242ac110018" in namespace "e2e-tests-emptydir-fk6bj" to be "success or failure" Jun 4 11:30:58.019: INFO: Pod "pod-dc330d3b-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.438122ms Jun 4 11:31:00.024: INFO: Pod "pod-dc330d3b-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022849844s Jun 4 11:31:02.028: INFO: Pod "pod-dc330d3b-a656-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02718564s STEP: Saw pod success Jun 4 11:31:02.028: INFO: Pod "pod-dc330d3b-a656-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:31:02.031: INFO: Trying to get logs from node hunter-worker2 pod pod-dc330d3b-a656-11ea-86dc-0242ac110018 container test-container: STEP: delete the pod Jun 4 11:31:02.092: INFO: Waiting for pod pod-dc330d3b-a656-11ea-86dc-0242ac110018 to disappear Jun 4 11:31:02.105: INFO: Pod pod-dc330d3b-a656-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:31:02.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-fk6bj" for this suite. 
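The EmptyDir spec above ("non-root,0666,default") mounts a default-medium emptyDir, writes a file as a non-root user with mode 0666, and checks the resulting permissions. A rough sketch of such a pod follows; the mount path, UID, command and image are assumptions, and the permission/ownership verification done by the real mounttest container is not reproduced.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUser := int64(1001) // assumed non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"}, // assumed name
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUser},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				// Default medium is node-local disk; Medium: "Memory" would use tmpfs instead.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "docker.io/library/busybox:1.29", // placeholder image
				Command:      []string{"sh", "-c", "touch /data/f && chmod 0666 /data/f && ls -l /data/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/data"}},
			}},
		},
	}
	fmt.Println("emptyDir mounted at", pod.Spec.Containers[0].VolumeMounts[0].MountPath)
}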
Jun 4 11:31:08.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:31:08.204: INFO: namespace: e2e-tests-emptydir-fk6bj, resource: bindings, ignored listing per whitelist Jun 4 11:31:08.235: INFO: namespace e2e-tests-emptydir-fk6bj deletion completed in 6.127684961s • [SLOW TEST:10.352 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:31:08.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-e25fde62-a656-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 4 11:31:08.374: INFO: Waiting up to 5m0s for pod "pod-configmaps-e26191cb-a656-11ea-86dc-0242ac110018" in namespace "e2e-tests-configmap-bf6zr" to be "success or failure" Jun 4 11:31:08.378: INFO: Pod "pod-configmaps-e26191cb-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.786564ms Jun 4 11:31:10.396: INFO: Pod "pod-configmaps-e26191cb-a656-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022193437s Jun 4 11:31:12.402: INFO: Pod "pod-configmaps-e26191cb-a656-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027260347s STEP: Saw pod success Jun 4 11:31:12.402: INFO: Pod "pod-configmaps-e26191cb-a656-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:31:12.405: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-e26191cb-a656-11ea-86dc-0242ac110018 container configmap-volume-test: STEP: delete the pod Jun 4 11:31:12.427: INFO: Waiting for pod pod-configmaps-e26191cb-a656-11ea-86dc-0242ac110018 to disappear Jun 4 11:31:12.431: INFO: Pod pod-configmaps-e26191cb-a656-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:31:12.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-bf6zr" for this suite. 
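The ConfigMap volume spec above checks that a defaultMode set on the volume source is applied to every projected file. A sketch of just the relevant volume definition; the ConfigMap name and mode value are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // defaultMode applied to every file projected from the ConfigMap
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"}, // assumed name
				DefaultMode:          &mode,
			},
		},
	}
	fmt.Printf("volume %s projects ConfigMap %s with mode %o\n",
		vol.Name, vol.ConfigMap.Name, *vol.ConfigMap.DefaultMode)
}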
Jun 4 11:31:18.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:31:18.569: INFO: namespace: e2e-tests-configmap-bf6zr, resource: bindings, ignored listing per whitelist Jun 4 11:31:18.590: INFO: namespace e2e-tests-configmap-bf6zr deletion completed in 6.15480066s • [SLOW TEST:10.354 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:31:18.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-znz5j Jun 4 11:31:22.726: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-znz5j STEP: checking the pod's current state and verifying that restartCount is present Jun 4 11:31:22.730: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:35:23.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-znz5j" for this suite. 
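The probing spec above runs a pod with an HTTP liveness probe against /healthz and then watches for roughly four minutes (11:31 to 11:35 in the timestamps) that restartCount stays at 0. A sketch of such a probe on a container; the port, image and thresholds are assumptions, and note that the embedded field is named Handler in the client-go vintage matching this v1.13 log, while newer releases call it ProbeHandler.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "liveness-http",
		Image: "docker.io/library/nginx:1.14-alpine", // placeholder image assumed to serve /healthz
		Ports: []corev1.ContainerPort{{ContainerPort: 80}},
		LivenessProbe: &corev1.Probe{
			// Handler was renamed ProbeHandler in later API versions.
			Handler: corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(80)},
			},
			InitialDelaySeconds: 15,
			PeriodSeconds:       10,
			FailureThreshold:    3, // kubelet restarts the container only after 3 consecutive failures
		},
	}
	fmt.Println("liveness probe path:", c.LivenessProbe.HTTPGet.Path)
}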
Jun 4 11:35:29.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:35:29.812: INFO: namespace: e2e-tests-container-probe-znz5j, resource: bindings, ignored listing per whitelist Jun 4 11:35:29.855: INFO: namespace e2e-tests-container-probe-znz5j deletion completed in 6.098598858s • [SLOW TEST:251.265 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:35:29.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-7e4b8fc5-a657-11ea-86dc-0242ac110018 Jun 4 11:35:29.966: INFO: Pod name my-hostname-basic-7e4b8fc5-a657-11ea-86dc-0242ac110018: Found 0 pods out of 1 Jun 4 11:35:34.971: INFO: Pod name my-hostname-basic-7e4b8fc5-a657-11ea-86dc-0242ac110018: Found 1 pods out of 1 Jun 4 11:35:34.971: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-7e4b8fc5-a657-11ea-86dc-0242ac110018" are running Jun 4 11:35:34.974: INFO: Pod "my-hostname-basic-7e4b8fc5-a657-11ea-86dc-0242ac110018-vcmm5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-04 11:35:30 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-04 11:35:33 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-04 11:35:33 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-04 11:35:29 +0000 UTC Reason: Message:}]) Jun 4 11:35:34.974: INFO: Trying to dial the pod Jun 4 11:35:39.987: INFO: Controller my-hostname-basic-7e4b8fc5-a657-11ea-86dc-0242ac110018: Got expected result from replica 1 [my-hostname-basic-7e4b8fc5-a657-11ea-86dc-0242ac110018-vcmm5]: "my-hostname-basic-7e4b8fc5-a657-11ea-86dc-0242ac110018-vcmm5", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:35:39.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-h6mdn" for this suite. 
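The ReplicationController spec above creates a one-replica controller whose pods serve their own hostname, then dials each replica and expects the pod's name back. A minimal RC of the same shape is sketched below; the labels, image reference and port are assumptions, not taken from the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	name := "my-hostname-basic"
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: map[string]string{"name": name}, // assumed selector
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": name}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  name,
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // assumed image reference
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},          // assumed port
					}},
				},
			},
		},
	}
	fmt.Printf("rc %s: %d replica(s) selected by %v\n", rc.Name, *rc.Spec.Replicas, rc.Spec.Selector)
}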
Jun 4 11:35:46.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:35:46.080: INFO: namespace: e2e-tests-replication-controller-h6mdn, resource: bindings, ignored listing per whitelist Jun 4 11:35:46.084: INFO: namespace e2e-tests-replication-controller-h6mdn deletion completed in 6.094588091s • [SLOW TEST:16.229 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:35:46.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 4 11:35:46.218: INFO: Waiting up to 5m0s for pod "pod-87fbb25c-a657-11ea-86dc-0242ac110018" in namespace "e2e-tests-emptydir-q9gm5" to be "success or failure" Jun 4 11:35:46.227: INFO: Pod "pod-87fbb25c-a657-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.104753ms Jun 4 11:35:48.275: INFO: Pod "pod-87fbb25c-a657-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057078969s Jun 4 11:35:50.279: INFO: Pod "pod-87fbb25c-a657-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060593152s STEP: Saw pod success Jun 4 11:35:50.279: INFO: Pod "pod-87fbb25c-a657-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:35:50.280: INFO: Trying to get logs from node hunter-worker pod pod-87fbb25c-a657-11ea-86dc-0242ac110018 container test-container: STEP: delete the pod Jun 4 11:35:50.316: INFO: Waiting for pod pod-87fbb25c-a657-11ea-86dc-0242ac110018 to disappear Jun 4 11:35:50.334: INFO: Pod pod-87fbb25c-a657-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:35:50.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-q9gm5" for this suite. 
Jun 4 11:35:56.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:35:56.436: INFO: namespace: e2e-tests-emptydir-q9gm5, resource: bindings, ignored listing per whitelist Jun 4 11:35:56.454: INFO: namespace e2e-tests-emptydir-q9gm5 deletion completed in 6.116193694s • [SLOW TEST:10.369 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:35:56.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 4 11:35:56.564: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e2700fb-a657-11ea-86dc-0242ac110018" in namespace "e2e-tests-downward-api-ncfdj" to be "success or failure" Jun 4 11:35:56.579: INFO: Pod "downwardapi-volume-8e2700fb-a657-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.133262ms Jun 4 11:35:58.583: INFO: Pod "downwardapi-volume-8e2700fb-a657-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018458919s Jun 4 11:36:00.588: INFO: Pod "downwardapi-volume-8e2700fb-a657-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02316692s STEP: Saw pod success Jun 4 11:36:00.588: INFO: Pod "downwardapi-volume-8e2700fb-a657-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:36:00.591: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-8e2700fb-a657-11ea-86dc-0242ac110018 container client-container: STEP: delete the pod Jun 4 11:36:00.623: INFO: Waiting for pod downwardapi-volume-8e2700fb-a657-11ea-86dc-0242ac110018 to disappear Jun 4 11:36:00.652: INFO: Pod downwardapi-volume-8e2700fb-a657-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:36:00.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ncfdj" for this suite. 
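The Downward API volume spec above projects pod metadata into files and verifies that the mode set via defaultMode lands on those files. A sketch of the volume source alone; the volume name, file path and mode are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400)
	vol := corev1.Volume{
		Name: "podinfo", // assumed volume name
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				DefaultMode: &mode, // applied to every projected file unless overridden per item
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
				}},
			},
		},
	}
	fmt.Printf("downward API file %q <- %s (mode %o)\n",
		vol.DownwardAPI.Items[0].Path, vol.DownwardAPI.Items[0].FieldRef.FieldPath, *vol.DownwardAPI.DefaultMode)
}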
Jun 4 11:36:06.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:36:06.681: INFO: namespace: e2e-tests-downward-api-ncfdj, resource: bindings, ignored listing per whitelist Jun 4 11:36:06.748: INFO: namespace e2e-tests-downward-api-ncfdj deletion completed in 6.092386682s • [SLOW TEST:10.294 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:36:06.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:37:06.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-5l6ml" for this suite. 
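The probing spec above gives a container a readiness probe that always fails and asserts the pod never becomes Ready and is never restarted; unlike liveness failures, readiness failures only remove the pod from service endpoints, they never trigger a restart. A sketch of such a probe; the command and image are assumptions, and the Probe field naming again matches the client-go vintage of this log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "test-webserver", // assumed name
		Image:   "docker.io/library/busybox:1.29", // placeholder image
		Command: []string{"sleep", "3600"},
		ReadinessProbe: &corev1.Probe{
			// An exec probe that always exits non-zero: the pod stays NotReady,
			// but the kubelet never restarts the container for readiness failures.
			Handler:       corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"/bin/false"}}},
			PeriodSeconds: 5,
		},
	}
	fmt.Println("readiness command:", c.ReadinessProbe.Exec.Command)
}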
Jun 4 11:37:28.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:37:28.942: INFO: namespace: e2e-tests-container-probe-5l6ml, resource: bindings, ignored listing per whitelist Jun 4 11:37:29.015: INFO: namespace e2e-tests-container-probe-5l6ml deletion completed in 22.116920421s • [SLOW TEST:82.266 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:37:29.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-5skms Jun 4 11:37:33.152: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-5skms STEP: checking the pod's current state and verifying that restartCount is present Jun 4 11:37:33.155: INFO: Initial restart count of pod liveness-http is 0 Jun 4 11:37:53.197: INFO: Restart count of pod e2e-tests-container-probe-5skms/liveness-http is now 1 (20.041796966s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:37:53.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-5skms" for this suite. 
Jun 4 11:37:59.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:37:59.313: INFO: namespace: e2e-tests-container-probe-5skms, resource: bindings, ignored listing per whitelist Jun 4 11:37:59.348: INFO: namespace e2e-tests-container-probe-5skms deletion completed in 6.096082174s • [SLOW TEST:30.333 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:37:59.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 4 11:37:59.475: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d764dbd9-a657-11ea-86dc-0242ac110018" in namespace "e2e-tests-downward-api-b5vdv" to be "success or failure" Jun 4 11:37:59.481: INFO: Pod "downwardapi-volume-d764dbd9-a657-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.218668ms Jun 4 11:38:01.486: INFO: Pod "downwardapi-volume-d764dbd9-a657-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010503473s Jun 4 11:38:03.489: INFO: Pod "downwardapi-volume-d764dbd9-a657-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014188725s STEP: Saw pod success Jun 4 11:38:03.489: INFO: Pod "downwardapi-volume-d764dbd9-a657-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:38:03.492: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-d764dbd9-a657-11ea-86dc-0242ac110018 container client-container: STEP: delete the pod Jun 4 11:38:03.512: INFO: Waiting for pod downwardapi-volume-d764dbd9-a657-11ea-86dc-0242ac110018 to disappear Jun 4 11:38:03.516: INFO: Pod downwardapi-volume-d764dbd9-a657-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:38:03.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-b5vdv" for this suite. 
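The Downward API spec above checks that when the container declares no CPU limit, the projected limits.cpu value falls back to the node's allocatable CPU. A sketch of the resourceFieldRef item that performs the projection; the file path and container name are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	item := corev1.DownwardAPIVolumeFile{
		Path: "cpu_limit",
		// With no explicit limit on the container, the kubelet resolves
		// limits.cpu to the node's allocatable CPU when projecting it.
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container", // must name a container in the same pod
			Resource:      "limits.cpu",
		},
	}
	fmt.Printf("file %q <- %s of %s\n",
		item.Path, item.ResourceFieldRef.Resource, item.ResourceFieldRef.ContainerName)
}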
Jun 4 11:38:09.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:38:09.632: INFO: namespace: e2e-tests-downward-api-b5vdv, resource: bindings, ignored listing per whitelist Jun 4 11:38:09.639: INFO: namespace e2e-tests-downward-api-b5vdv deletion completed in 6.093822923s • [SLOW TEST:10.290 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:38:09.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Jun 4 11:38:09.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-plnhz' Jun 4 11:38:12.354: INFO: stderr: "" Jun 4 11:38:12.354: INFO: stdout: "pod/pause created\n" Jun 4 11:38:12.354: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jun 4 11:38:12.354: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-plnhz" to be "running and ready" Jun 4 11:38:12.380: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 25.716133ms Jun 4 11:38:14.413: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058802052s Jun 4 11:38:16.417: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.063066241s Jun 4 11:38:16.418: INFO: Pod "pause" satisfied condition "running and ready" Jun 4 11:38:16.418: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Jun 4 11:38:16.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-plnhz' Jun 4 11:38:16.514: INFO: stderr: "" Jun 4 11:38:16.514: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jun 4 11:38:16.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-plnhz' Jun 4 11:38:16.606: INFO: stderr: "" Jun 4 11:38:16.606: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Jun 4 11:38:16.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-plnhz' Jun 4 11:38:16.702: INFO: stderr: "" Jun 4 11:38:16.702: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jun 4 11:38:16.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-plnhz' Jun 4 11:38:16.792: INFO: stderr: "" Jun 4 11:38:16.792: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Jun 4 11:38:16.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-plnhz' Jun 4 11:38:16.912: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 4 11:38:16.912: INFO: stdout: "pod \"pause\" force deleted\n" Jun 4 11:38:16.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-plnhz' Jun 4 11:38:17.006: INFO: stderr: "No resources found.\n" Jun 4 11:38:17.006: INFO: stdout: "" Jun 4 11:38:17.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-plnhz -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 4 11:38:17.193: INFO: stderr: "" Jun 4 11:38:17.193: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:38:17.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-plnhz" for this suite. 
Jun 4 11:38:23.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:38:23.348: INFO: namespace: e2e-tests-kubectl-plnhz, resource: bindings, ignored listing per whitelist Jun 4 11:38:23.353: INFO: namespace e2e-tests-kubectl-plnhz deletion completed in 6.156358524s • [SLOW TEST:13.714 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:38:23.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 4 11:38:23.488: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5b52d98-a657-11ea-86dc-0242ac110018" in namespace "e2e-tests-downward-api-v7mjn" to be "success or failure" Jun 4 11:38:23.521: INFO: Pod "downwardapi-volume-e5b52d98-a657-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 32.629088ms Jun 4 11:38:25.525: INFO: Pod "downwardapi-volume-e5b52d98-a657-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037163362s Jun 4 11:38:27.529: INFO: Pod "downwardapi-volume-e5b52d98-a657-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041121185s STEP: Saw pod success Jun 4 11:38:27.529: INFO: Pod "downwardapi-volume-e5b52d98-a657-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:38:27.531: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-e5b52d98-a657-11ea-86dc-0242ac110018 container client-container: STEP: delete the pod Jun 4 11:38:27.594: INFO: Waiting for pod downwardapi-volume-e5b52d98-a657-11ea-86dc-0242ac110018 to disappear Jun 4 11:38:27.707: INFO: Pod downwardapi-volume-e5b52d98-a657-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:38:27.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-v7mjn" for this suite. 
Jun 4 11:38:33.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:38:33.803: INFO: namespace: e2e-tests-downward-api-v7mjn, resource: bindings, ignored listing per whitelist Jun 4 11:38:33.824: INFO: namespace e2e-tests-downward-api-v7mjn deletion completed in 6.113882384s • [SLOW TEST:10.471 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:38:33.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 4 11:38:33.951: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ebf675ae-a657-11ea-86dc-0242ac110018" in namespace "e2e-tests-downward-api-p5c94" to be "success or failure" Jun 4 11:38:33.988: INFO: Pod "downwardapi-volume-ebf675ae-a657-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 37.414611ms Jun 4 11:38:35.993: INFO: Pod "downwardapi-volume-ebf675ae-a657-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042500111s Jun 4 11:38:37.998: INFO: Pod "downwardapi-volume-ebf675ae-a657-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046580666s STEP: Saw pod success Jun 4 11:38:37.998: INFO: Pod "downwardapi-volume-ebf675ae-a657-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:38:38.001: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-ebf675ae-a657-11ea-86dc-0242ac110018 container client-container: STEP: delete the pod Jun 4 11:38:38.022: INFO: Waiting for pod downwardapi-volume-ebf675ae-a657-11ea-86dc-0242ac110018 to disappear Jun 4 11:38:38.039: INFO: Pod downwardapi-volume-ebf675ae-a657-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:38:38.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-p5c94" for this suite. 
Jun 4 11:38:44.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:38:44.114: INFO: namespace: e2e-tests-downward-api-p5c94, resource: bindings, ignored listing per whitelist Jun 4 11:38:44.152: INFO: namespace e2e-tests-downward-api-p5c94 deletion completed in 6.109182351s • [SLOW TEST:10.327 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:38:44.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-nr8sr/configmap-test-f2203622-a657-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 4 11:38:44.293: INFO: Waiting up to 5m0s for pod "pod-configmaps-f2214b71-a657-11ea-86dc-0242ac110018" in namespace "e2e-tests-configmap-nr8sr" to be "success or failure" Jun 4 11:38:44.299: INFO: Pod "pod-configmaps-f2214b71-a657-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088357ms Jun 4 11:38:46.303: INFO: Pod "pod-configmaps-f2214b71-a657-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01022285s Jun 4 11:38:48.308: INFO: Pod "pod-configmaps-f2214b71-a657-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014454552s STEP: Saw pod success Jun 4 11:38:48.308: INFO: Pod "pod-configmaps-f2214b71-a657-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:38:48.310: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-f2214b71-a657-11ea-86dc-0242ac110018 container env-test: STEP: delete the pod Jun 4 11:38:48.348: INFO: Waiting for pod pod-configmaps-f2214b71-a657-11ea-86dc-0242ac110018 to disappear Jun 4 11:38:48.363: INFO: Pod pod-configmaps-f2214b71-a657-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:38:48.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-nr8sr" for this suite. 
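The ConfigMap spec above injects a single ConfigMap key into the container's environment and then checks the value in the container's output. A sketch of the EnvVar wiring; the variable name, ConfigMap name and key are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := corev1.EnvVar{
		Name: "CONFIG_DATA_1", // assumed variable name
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"}, // assumed name
				Key:                  "data-1",                                            // assumed key
			},
		},
	}
	fmt.Printf("%s <- ConfigMap %s key %s\n",
		env.Name, env.ValueFrom.ConfigMapKeyRef.Name, env.ValueFrom.ConfigMapKeyRef.Key)
}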
Jun 4 11:38:54.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:38:54.439: INFO: namespace: e2e-tests-configmap-nr8sr, resource: bindings, ignored listing per whitelist Jun 4 11:38:54.463: INFO: namespace e2e-tests-configmap-nr8sr deletion completed in 6.097513063s • [SLOW TEST:10.311 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:38:54.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 4 11:38:54.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-lg45h' Jun 4 11:38:54.668: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 4 11:38:54.668: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Jun 4 11:38:56.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-lg45h' Jun 4 11:38:56.871: INFO: stderr: "" Jun 4 11:38:56.871: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:38:56.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lg45h" for this suite. 
Jun 4 11:40:18.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:40:19.021: INFO: namespace: e2e-tests-kubectl-lg45h, resource: bindings, ignored listing per whitelist Jun 4 11:40:19.021: INFO: namespace e2e-tests-kubectl-lg45h deletion completed in 1m22.146436598s • [SLOW TEST:84.558 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:40:19.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-2aa57fb1-a658-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 4 11:40:19.199: INFO: Waiting up to 5m0s for pod "pod-configmaps-2aa7f174-a658-11ea-86dc-0242ac110018" in namespace "e2e-tests-configmap-vqrjr" to be "success or failure" Jun 4 11:40:19.202: INFO: Pod "pod-configmaps-2aa7f174-a658-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.637529ms Jun 4 11:40:21.206: INFO: Pod "pod-configmaps-2aa7f174-a658-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006297068s Jun 4 11:40:23.210: INFO: Pod "pod-configmaps-2aa7f174-a658-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010840491s STEP: Saw pod success Jun 4 11:40:23.210: INFO: Pod "pod-configmaps-2aa7f174-a658-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:40:23.213: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-2aa7f174-a658-11ea-86dc-0242ac110018 container configmap-volume-test: STEP: delete the pod Jun 4 11:40:23.233: INFO: Waiting for pod pod-configmaps-2aa7f174-a658-11ea-86dc-0242ac110018 to disappear Jun 4 11:40:23.259: INFO: Pod pod-configmaps-2aa7f174-a658-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:40:23.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-vqrjr" for this suite. 
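The ConfigMap volume "with mappings" spec above differs from the earlier defaultMode one in that it uses an items list, so only selected keys are projected, each under a chosen relative path. A short sketch of the mapping; the ConfigMap name, key and path are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	src := &corev1.ConfigMapVolumeSource{
		LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"}, // assumed name
		// Only the listed keys are projected, each under its own relative path.
		Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
	}
	fmt.Printf("ConfigMap %s: key %s -> %s\n", src.Name, src.Items[0].Key, src.Items[0].Path)
}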
Jun 4 11:40:29.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:40:29.331: INFO: namespace: e2e-tests-configmap-vqrjr, resource: bindings, ignored listing per whitelist Jun 4 11:40:29.368: INFO: namespace e2e-tests-configmap-vqrjr deletion completed in 6.104860156s • [SLOW TEST:10.346 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:40:29.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Jun 4 11:40:29.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-mhdcp run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jun 4 11:40:32.525: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0604 11:40:32.428349 3031 log.go:172] (0xc000936370) (0xc0005edd60) Create stream\nI0604 11:40:32.428410 3031 log.go:172] (0xc000936370) (0xc0005edd60) Stream added, broadcasting: 1\nI0604 11:40:32.431070 3031 log.go:172] (0xc000936370) Reply frame received for 1\nI0604 11:40:32.431108 3031 log.go:172] (0xc000936370) (0xc0002fa000) Create stream\nI0604 11:40:32.431122 3031 log.go:172] (0xc000936370) (0xc0002fa000) Stream added, broadcasting: 3\nI0604 11:40:32.432037 3031 log.go:172] (0xc000936370) Reply frame received for 3\nI0604 11:40:32.432109 3031 log.go:172] (0xc000936370) (0xc0006c45a0) Create stream\nI0604 11:40:32.432128 3031 log.go:172] (0xc000936370) (0xc0006c45a0) Stream added, broadcasting: 5\nI0604 11:40:32.433303 3031 log.go:172] (0xc000936370) Reply frame received for 5\nI0604 11:40:32.433357 3031 log.go:172] (0xc000936370) (0xc0005ede00) Create stream\nI0604 11:40:32.433381 3031 log.go:172] (0xc000936370) (0xc0005ede00) Stream added, broadcasting: 7\nI0604 11:40:32.434253 3031 log.go:172] (0xc000936370) Reply frame received for 7\nI0604 11:40:32.434420 3031 log.go:172] (0xc0002fa000) (3) Writing data frame\nI0604 11:40:32.434530 3031 log.go:172] (0xc0002fa000) (3) Writing data frame\nI0604 11:40:32.435379 3031 log.go:172] (0xc000936370) Data frame received for 5\nI0604 11:40:32.435405 3031 log.go:172] (0xc0006c45a0) (5) Data frame handling\nI0604 11:40:32.435427 3031 log.go:172] (0xc0006c45a0) (5) Data frame sent\nI0604 11:40:32.435920 3031 log.go:172] (0xc000936370) Data frame received for 5\nI0604 11:40:32.435934 3031 log.go:172] (0xc0006c45a0) (5) Data frame handling\nI0604 11:40:32.435947 3031 log.go:172] (0xc0006c45a0) (5) Data frame sent\nI0604 11:40:32.500221 3031 log.go:172] (0xc000936370) Data frame received for 7\nI0604 11:40:32.500272 3031 log.go:172] (0xc0005ede00) (7) Data frame handling\nI0604 11:40:32.500677 3031 log.go:172] (0xc000936370) Data frame received for 5\nI0604 11:40:32.500727 3031 log.go:172] (0xc000936370) (0xc0002fa000) Stream removed, broadcasting: 3\nI0604 11:40:32.500767 3031 log.go:172] (0xc000936370) Data frame received for 1\nI0604 11:40:32.500825 3031 log.go:172] (0xc0005edd60) (1) Data frame handling\nI0604 11:40:32.500848 3031 log.go:172] (0xc0005edd60) (1) Data frame sent\nI0604 11:40:32.500904 3031 log.go:172] (0xc0006c45a0) (5) Data frame handling\nI0604 11:40:32.500974 3031 log.go:172] (0xc000936370) (0xc0005edd60) Stream removed, broadcasting: 1\nI0604 11:40:32.501061 3031 log.go:172] (0xc000936370) Go away received\nI0604 11:40:32.501248 3031 log.go:172] (0xc000936370) (0xc0005edd60) Stream removed, broadcasting: 1\nI0604 11:40:32.501290 3031 log.go:172] (0xc000936370) (0xc0002fa000) Stream removed, broadcasting: 3\nI0604 11:40:32.501309 3031 log.go:172] (0xc000936370) (0xc0006c45a0) Stream removed, broadcasting: 5\nI0604 11:40:32.501332 3031 log.go:172] (0xc000936370) (0xc0005ede00) Stream removed, broadcasting: 7\n" Jun 4 11:40:32.525: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:40:34.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mhdcp" for this suite. 
Jun 4 11:40:40.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:40:40.622: INFO: namespace: e2e-tests-kubectl-mhdcp, resource: bindings, ignored listing per whitelist Jun 4 11:40:40.639: INFO: namespace e2e-tests-kubectl-mhdcp deletion completed in 6.10220528s • [SLOW TEST:11.271 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:40:40.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-72f7n [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-72f7n STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-72f7n STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-72f7n STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-72f7n Jun 4 11:40:46.822: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-72f7n, name: ss-0, uid: 3b0af07d-a658-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete. Jun 4 11:40:47.194: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-72f7n, name: ss-0, uid: 3b0af07d-a658-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. Jun 4 11:40:47.200: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-72f7n, name: ss-0, uid: 3b0af07d-a658-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. 
Jun 4 11:40:47.203: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-72f7n STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-72f7n STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-72f7n and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 4 11:40:53.275: INFO: Deleting all statefulset in ns e2e-tests-statefulset-72f7n Jun 4 11:40:53.279: INFO: Scaling statefulset ss to 0 Jun 4 11:41:03.299: INFO: Waiting for statefulset status.replicas updated to 0 Jun 4 11:41:03.302: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:41:03.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-72f7n" for this suite. Jun 4 11:41:09.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:41:09.396: INFO: namespace: e2e-tests-statefulset-72f7n, resource: bindings, ignored listing per whitelist Jun 4 11:41:09.420: INFO: namespace e2e-tests-statefulset-72f7n deletion completed in 6.100351257s • [SLOW TEST:28.780 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:41:09.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Jun 4 11:41:10.093: INFO: created pod pod-service-account-defaultsa Jun 4 11:41:10.093: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jun 4 11:41:10.103: INFO: created pod pod-service-account-mountsa Jun 4 11:41:10.103: INFO: pod pod-service-account-mountsa service account token volume mount: true Jun 4 11:41:10.127: INFO: created pod pod-service-account-nomountsa Jun 4 11:41:10.127: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jun 4 11:41:10.144: INFO: created pod pod-service-account-defaultsa-mountspec Jun 4 11:41:10.144: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jun 4 11:41:10.219: INFO: created pod pod-service-account-mountsa-mountspec Jun 4 11:41:10.219: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jun 4 11:41:10.243: INFO: created 
pod pod-service-account-nomountsa-mountspec Jun 4 11:41:10.243: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jun 4 11:41:10.300: INFO: created pod pod-service-account-defaultsa-nomountspec Jun 4 11:41:10.300: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jun 4 11:41:10.357: INFO: created pod pod-service-account-mountsa-nomountspec Jun 4 11:41:10.357: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jun 4 11:41:10.394: INFO: created pod pod-service-account-nomountsa-nomountspec Jun 4 11:41:10.394: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:41:10.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-hl4cw" for this suite. Jun 4 11:41:38.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:41:38.592: INFO: namespace: e2e-tests-svcaccounts-hl4cw, resource: bindings, ignored listing per whitelist Jun 4 11:41:38.604: INFO: namespace e2e-tests-svcaccounts-hl4cw deletion completed in 28.10670342s • [SLOW TEST:29.184 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:41:38.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-jqbt7 Jun 4 11:41:42.771: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-jqbt7 STEP: checking the pod's current state and verifying that restartCount is present Jun 4 11:41:42.774: INFO: Initial restart count of pod liveness-http is 0 Jun 4 11:42:00.870: INFO: Restart count of pod e2e-tests-container-probe-jqbt7/liveness-http is now 1 (18.095780188s elapsed) Jun 4 11:42:21.222: INFO: Restart count of pod e2e-tests-container-probe-jqbt7/liveness-http is now 2 (38.447835858s elapsed) Jun 4 11:42:39.259: INFO: Restart count of pod e2e-tests-container-probe-jqbt7/liveness-http is now 3 (56.485044985s elapsed) Jun 4 11:42:59.306: INFO: Restart count of pod e2e-tests-container-probe-jqbt7/liveness-http is now 4 (1m16.531267145s elapsed) Jun 4 11:44:03.572: INFO: Restart count of pod 
e2e-tests-container-probe-jqbt7/liveness-http is now 5 (2m20.798056013s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:44:03.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-jqbt7" for this suite. Jun 4 11:44:09.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:44:09.691: INFO: namespace: e2e-tests-container-probe-jqbt7, resource: bindings, ignored listing per whitelist Jun 4 11:44:09.700: INFO: namespace e2e-tests-container-probe-jqbt7 deletion completed in 6.093854913s • [SLOW TEST:151.095 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:44:09.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-b4296b7f-a658-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume secrets Jun 4 11:44:09.828: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b42a0356-a658-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-znn2r" to be "success or failure" Jun 4 11:44:09.846: INFO: Pod "pod-projected-secrets-b42a0356-a658-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.147588ms Jun 4 11:44:11.851: INFO: Pod "pod-projected-secrets-b42a0356-a658-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022351472s Jun 4 11:44:13.854: INFO: Pod "pod-projected-secrets-b42a0356-a658-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026174968s STEP: Saw pod success Jun 4 11:44:13.855: INFO: Pod "pod-projected-secrets-b42a0356-a658-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:44:13.858: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-b42a0356-a658-11ea-86dc-0242ac110018 container projected-secret-volume-test: STEP: delete the pod Jun 4 11:44:13.876: INFO: Waiting for pod pod-projected-secrets-b42a0356-a658-11ea-86dc-0242ac110018 to disappear Jun 4 11:44:13.880: INFO: Pod pod-projected-secrets-b42a0356-a658-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:44:13.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-znn2r" for this suite. Jun 4 11:44:19.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:44:19.957: INFO: namespace: e2e-tests-projected-znn2r, resource: bindings, ignored listing per whitelist Jun 4 11:44:19.973: INFO: namespace e2e-tests-projected-znn2r deletion completed in 6.089800678s • [SLOW TEST:10.273 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:44:19.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jun 4 11:44:27.126: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:44:28.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-84fcr" for this suite. 
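[Annotation] The ReplicaSet spec above checks adoption of a pre-existing pod and its release when its labels stop matching. A minimal kubectl sketch of that flow follows; the framework itself drives the Go client directly, and the image and verification commands here are illustrative assumptions.

  # 1) A bare pod carrying the label the ReplicaSet will select on.
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-adoption-release
    labels:
      name: pod-adoption-release
  spec:
    containers:
    - name: app
      image: docker.io/library/busybox:1.29
      command: ["sleep", "3600"]
  EOF
  # 2) A ReplicaSet whose selector matches that pod; the controller adopts it.
  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    name: pod-adoption-release
  spec:
    replicas: 1
    selector:
      matchLabels:
        name: pod-adoption-release
    template:
      metadata:
        labels:
          name: pod-adoption-release
      spec:
        containers:
        - name: app
          image: docker.io/library/busybox:1.29
          command: ["sleep", "3600"]
  EOF
  # Adoption shows up as an ownerReference on the existing pod.
  kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].kind}'
  # 3) Changing the matched label releases the pod; the ReplicaSet creates a replacement.
  kubectl label pod pod-adoption-release name=not-matching --overwrite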
Jun 4 11:44:50.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:44:50.226: INFO: namespace: e2e-tests-replicaset-84fcr, resource: bindings, ignored listing per whitelist Jun 4 11:44:50.260: INFO: namespace e2e-tests-replicaset-84fcr deletion completed in 22.096513096s • [SLOW TEST:30.287 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:44:50.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 4 11:44:50.383: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cc559bf9-a658-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-7rtxd" to be "success or failure" Jun 4 11:44:50.437: INFO: Pod "downwardapi-volume-cc559bf9-a658-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 53.484872ms Jun 4 11:44:52.441: INFO: Pod "downwardapi-volume-cc559bf9-a658-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057646102s Jun 4 11:44:54.444: INFO: Pod "downwardapi-volume-cc559bf9-a658-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060717033s STEP: Saw pod success Jun 4 11:44:54.444: INFO: Pod "downwardapi-volume-cc559bf9-a658-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:44:54.447: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-cc559bf9-a658-11ea-86dc-0242ac110018 container client-container: STEP: delete the pod Jun 4 11:44:54.557: INFO: Waiting for pod downwardapi-volume-cc559bf9-a658-11ea-86dc-0242ac110018 to disappear Jun 4 11:44:54.635: INFO: Pod downwardapi-volume-cc559bf9-a658-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:44:54.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7rtxd" for this suite. 
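[Annotation] The projected downwardAPI test above asserts that limits.cpu falls back to the node's allocatable CPU when the container declares no CPU limit. A minimal sketch of such a pod is below; names and the busybox image are assumptions, not the objects the framework creates.

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      # No resources.limits.cpu is set, so the projected value resolves to the
      # node's allocatable CPU, which is what the test asserts.
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.cpu
  EOF
  kubectl logs downwardapi-volume-demo   # prints the node-allocatable CPU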
Jun 4 11:45:00.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:45:00.748: INFO: namespace: e2e-tests-projected-7rtxd, resource: bindings, ignored listing per whitelist Jun 4 11:45:00.808: INFO: namespace e2e-tests-projected-7rtxd deletion completed in 6.169399111s • [SLOW TEST:10.547 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:45:00.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-d29b5e03-a658-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume secrets Jun 4 11:45:00.911: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d29c868c-a658-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-zbm6b" to be "success or failure" Jun 4 11:45:00.914: INFO: Pod "pod-projected-secrets-d29c868c-a658-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.478527ms Jun 4 11:45:02.919: INFO: Pod "pod-projected-secrets-d29c868c-a658-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007820147s Jun 4 11:45:04.923: INFO: Pod "pod-projected-secrets-d29c868c-a658-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012095134s STEP: Saw pod success Jun 4 11:45:04.923: INFO: Pod "pod-projected-secrets-d29c868c-a658-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:45:04.926: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-d29c868c-a658-11ea-86dc-0242ac110018 container projected-secret-volume-test: STEP: delete the pod Jun 4 11:45:04.950: INFO: Waiting for pod pod-projected-secrets-d29c868c-a658-11ea-86dc-0242ac110018 to disappear Jun 4 11:45:04.954: INFO: Pod pod-projected-secrets-d29c868c-a658-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:45:04.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zbm6b" for this suite. 
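[Annotation] The projected-secret tests above (with and without key mappings and item modes) consume a Secret through a projected volume. A minimal sketch is below; the secret name, key, and image are illustrative. The plain Secrets-volume test later in this log differs only in using a `secret:` volume directly instead of `projected.sources`.

  kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secrets-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/projected-secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      projected:
        sources:
        - secret:
            name: projected-secret-demo
            # The "mappings and Item Mode set" variant additionally remaps the key
            # to a new path and sets a per-item mode (e.g. 0400) here via `items:`.
  EOF
  kubectl logs pod-projected-secrets-demo   # expected: value-1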
Jun 4 11:45:11.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:45:11.095: INFO: namespace: e2e-tests-projected-zbm6b, resource: bindings, ignored listing per whitelist Jun 4 11:45:11.117: INFO: namespace e2e-tests-projected-zbm6b deletion completed in 6.159536038s • [SLOW TEST:10.308 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:45:11.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 4 11:45:11.296: INFO: Waiting up to 5m0s for pod "pod-d8be2e17-a658-11ea-86dc-0242ac110018" in namespace "e2e-tests-emptydir-j7hhh" to be "success or failure" Jun 4 11:45:11.301: INFO: Pod "pod-d8be2e17-a658-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.494845ms Jun 4 11:45:13.305: INFO: Pod "pod-d8be2e17-a658-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00796293s Jun 4 11:45:15.309: INFO: Pod "pod-d8be2e17-a658-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012042862s STEP: Saw pod success Jun 4 11:45:15.309: INFO: Pod "pod-d8be2e17-a658-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:45:15.312: INFO: Trying to get logs from node hunter-worker pod pod-d8be2e17-a658-11ea-86dc-0242ac110018 container test-container: STEP: delete the pod Jun 4 11:45:15.329: INFO: Waiting for pod pod-d8be2e17-a658-11ea-86dc-0242ac110018 to disappear Jun 4 11:45:15.340: INFO: Pod pod-d8be2e17-a658-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:45:15.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-j7hhh" for this suite. 
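[Annotation] The EmptyDir (root,0644,tmpfs) test above, and the later default-medium and non-root variants, boil down to writing into an emptyDir and checking the filesystem type and file mode. A rough sketch under those assumptions (the conformance suite uses a dedicated mounttest image; busybox and the umask trick are stand-ins):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      # Show the volume is tmpfs, create a file, and display its mode (0644).
      command: ["sh", "-c", "mount | grep /data && umask 0133 && echo hello > /data/f && ls -l /data/f"]
      volumeMounts:
      - name: data
        mountPath: /data
    volumes:
    - name: data
      emptyDir:
        medium: Memory     # tmpfs-backed; omit `medium` for the "default medium" variant
  EOF
  kubectl logs emptydir-tmpfs-demo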
Jun 4 11:45:21.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:45:21.392: INFO: namespace: e2e-tests-emptydir-j7hhh, resource: bindings, ignored listing per whitelist Jun 4 11:45:21.457: INFO: namespace e2e-tests-emptydir-j7hhh deletion completed in 6.114393508s • [SLOW TEST:10.340 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:45:21.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Jun 4 11:45:21.557: INFO: Waiting up to 5m0s for pod "pod-deea8391-a658-11ea-86dc-0242ac110018" in namespace "e2e-tests-emptydir-bchs9" to be "success or failure" Jun 4 11:45:21.561: INFO: Pod "pod-deea8391-a658-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.968605ms Jun 4 11:45:23.566: INFO: Pod "pod-deea8391-a658-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008396061s Jun 4 11:45:25.570: INFO: Pod "pod-deea8391-a658-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013015195s STEP: Saw pod success Jun 4 11:45:25.571: INFO: Pod "pod-deea8391-a658-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:45:25.574: INFO: Trying to get logs from node hunter-worker2 pod pod-deea8391-a658-11ea-86dc-0242ac110018 container test-container: STEP: delete the pod Jun 4 11:45:25.593: INFO: Waiting for pod pod-deea8391-a658-11ea-86dc-0242ac110018 to disappear Jun 4 11:45:25.598: INFO: Pod pod-deea8391-a658-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:45:25.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-bchs9" for this suite. 
Jun 4 11:45:31.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:45:31.645: INFO: namespace: e2e-tests-emptydir-bchs9, resource: bindings, ignored listing per whitelist Jun 4 11:45:31.708: INFO: namespace e2e-tests-emptydir-bchs9 deletion completed in 6.105937428s • [SLOW TEST:10.250 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:45:31.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-e50c9fe7-a658-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 4 11:45:31.866: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e50fab2c-a658-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-lqz2l" to be "success or failure" Jun 4 11:45:31.870: INFO: Pod "pod-projected-configmaps-e50fab2c-a658-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122998ms Jun 4 11:45:33.940: INFO: Pod "pod-projected-configmaps-e50fab2c-a658-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074394398s Jun 4 11:45:35.945: INFO: Pod "pod-projected-configmaps-e50fab2c-a658-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078474103s STEP: Saw pod success Jun 4 11:45:35.945: INFO: Pod "pod-projected-configmaps-e50fab2c-a658-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:45:35.947: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-e50fab2c-a658-11ea-86dc-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jun 4 11:45:35.983: INFO: Waiting for pod pod-projected-configmaps-e50fab2c-a658-11ea-86dc-0242ac110018 to disappear Jun 4 11:45:36.036: INFO: Pod pod-projected-configmaps-e50fab2c-a658-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:45:36.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lqz2l" for this suite. 
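[Annotation] The projected-configMap test above mounts the same ConfigMap through two independent projected volumes in one pod. A minimal sketch, with assumed names and image:

  kubectl create configmap projected-configmap-demo --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-configmaps-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: docker.io/library/busybox:1.29
      # The same ConfigMap is consumed via two separate projected volumes.
      command: ["sh", "-c", "cat /etc/cm-volume-1/data-1 /etc/cm-volume-2/data-1"]
      volumeMounts:
      - name: cm-volume-1
        mountPath: /etc/cm-volume-1
      - name: cm-volume-2
        mountPath: /etc/cm-volume-2
    volumes:
    - name: cm-volume-1
      projected:
        sources:
        - configMap:
            name: projected-configmap-demo
    - name: cm-volume-2
      projected:
        sources:
        - configMap:
            name: projected-configmap-demo
  EOF
  kubectl logs pod-projected-configmaps-demo   # expected: value-1 printed twice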
Jun 4 11:45:42.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:45:42.104: INFO: namespace: e2e-tests-projected-lqz2l, resource: bindings, ignored listing per whitelist Jun 4 11:45:42.143: INFO: namespace e2e-tests-projected-lqz2l deletion completed in 6.102774107s • [SLOW TEST:10.435 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:45:42.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xlgd9 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 4 11:45:42.258: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 4 11:46:06.453: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.79:8080/dial?request=hostName&protocol=http&host=10.244.2.123&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-xlgd9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 11:46:06.453: INFO: >>> kubeConfig: /root/.kube/config I0604 11:46:06.489357 6 log.go:172] (0xc000b3c4d0) (0xc0027c4960) Create stream I0604 11:46:06.489386 6 log.go:172] (0xc000b3c4d0) (0xc0027c4960) Stream added, broadcasting: 1 I0604 11:46:06.492070 6 log.go:172] (0xc000b3c4d0) Reply frame received for 1 I0604 11:46:06.492129 6 log.go:172] (0xc000b3c4d0) (0xc00272e000) Create stream I0604 11:46:06.492145 6 log.go:172] (0xc000b3c4d0) (0xc00272e000) Stream added, broadcasting: 3 I0604 11:46:06.493491 6 log.go:172] (0xc000b3c4d0) Reply frame received for 3 I0604 11:46:06.493531 6 log.go:172] (0xc000b3c4d0) (0xc00272e140) Create stream I0604 11:46:06.493543 6 log.go:172] (0xc000b3c4d0) (0xc00272e140) Stream added, broadcasting: 5 I0604 11:46:06.494684 6 log.go:172] (0xc000b3c4d0) Reply frame received for 5 I0604 11:46:06.578807 6 log.go:172] (0xc000b3c4d0) Data frame received for 3 I0604 11:46:06.578879 6 log.go:172] (0xc00272e000) (3) Data frame handling I0604 11:46:06.579045 6 log.go:172] (0xc00272e000) (3) Data frame sent I0604 11:46:06.579849 6 log.go:172] (0xc000b3c4d0) Data frame received for 5 I0604 11:46:06.579881 6 log.go:172] (0xc00272e140) (5) Data frame handling I0604 11:46:06.580205 6 log.go:172] (0xc000b3c4d0) Data frame received for 3 I0604 11:46:06.580233 6 log.go:172] (0xc00272e000) (3) Data frame handling I0604 11:46:06.582328 6 log.go:172] (0xc000b3c4d0) Data 
frame received for 1 I0604 11:46:06.582349 6 log.go:172] (0xc0027c4960) (1) Data frame handling I0604 11:46:06.582360 6 log.go:172] (0xc0027c4960) (1) Data frame sent I0604 11:46:06.582388 6 log.go:172] (0xc000b3c4d0) (0xc0027c4960) Stream removed, broadcasting: 1 I0604 11:46:06.582414 6 log.go:172] (0xc000b3c4d0) Go away received I0604 11:46:06.582525 6 log.go:172] (0xc000b3c4d0) (0xc0027c4960) Stream removed, broadcasting: 1 I0604 11:46:06.582543 6 log.go:172] (0xc000b3c4d0) (0xc00272e000) Stream removed, broadcasting: 3 I0604 11:46:06.582552 6 log.go:172] (0xc000b3c4d0) (0xc00272e140) Stream removed, broadcasting: 5 Jun 4 11:46:06.582: INFO: Waiting for endpoints: map[] Jun 4 11:46:06.585: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.79:8080/dial?request=hostName&protocol=http&host=10.244.1.78&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-xlgd9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 4 11:46:06.585: INFO: >>> kubeConfig: /root/.kube/config I0604 11:46:06.618882 6 log.go:172] (0xc0006e3d90) (0xc00272e640) Create stream I0604 11:46:06.618916 6 log.go:172] (0xc0006e3d90) (0xc00272e640) Stream added, broadcasting: 1 I0604 11:46:06.622195 6 log.go:172] (0xc0006e3d90) Reply frame received for 1 I0604 11:46:06.622239 6 log.go:172] (0xc0006e3d90) (0xc002a683c0) Create stream I0604 11:46:06.622254 6 log.go:172] (0xc0006e3d90) (0xc002a683c0) Stream added, broadcasting: 3 I0604 11:46:06.623157 6 log.go:172] (0xc0006e3d90) Reply frame received for 3 I0604 11:46:06.623192 6 log.go:172] (0xc0006e3d90) (0xc00264a8c0) Create stream I0604 11:46:06.623207 6 log.go:172] (0xc0006e3d90) (0xc00264a8c0) Stream added, broadcasting: 5 I0604 11:46:06.624175 6 log.go:172] (0xc0006e3d90) Reply frame received for 5 I0604 11:46:06.710309 6 log.go:172] (0xc0006e3d90) Data frame received for 3 I0604 11:46:06.710344 6 log.go:172] (0xc002a683c0) (3) Data frame handling I0604 11:46:06.710365 6 log.go:172] (0xc002a683c0) (3) Data frame sent I0604 11:46:06.710788 6 log.go:172] (0xc0006e3d90) Data frame received for 5 I0604 11:46:06.710811 6 log.go:172] (0xc00264a8c0) (5) Data frame handling I0604 11:46:06.710836 6 log.go:172] (0xc0006e3d90) Data frame received for 3 I0604 11:46:06.710850 6 log.go:172] (0xc002a683c0) (3) Data frame handling I0604 11:46:06.712315 6 log.go:172] (0xc0006e3d90) Data frame received for 1 I0604 11:46:06.712337 6 log.go:172] (0xc00272e640) (1) Data frame handling I0604 11:46:06.712354 6 log.go:172] (0xc00272e640) (1) Data frame sent I0604 11:46:06.712367 6 log.go:172] (0xc0006e3d90) (0xc00272e640) Stream removed, broadcasting: 1 I0604 11:46:06.712440 6 log.go:172] (0xc0006e3d90) (0xc00272e640) Stream removed, broadcasting: 1 I0604 11:46:06.712450 6 log.go:172] (0xc0006e3d90) (0xc002a683c0) Stream removed, broadcasting: 3 I0604 11:46:06.712530 6 log.go:172] (0xc0006e3d90) Go away received I0604 11:46:06.712655 6 log.go:172] (0xc0006e3d90) (0xc00264a8c0) Stream removed, broadcasting: 5 Jun 4 11:46:06.712: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:46:06.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-xlgd9" for this suite. 
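[Annotation] The intra-pod networking check above is the curl visible in the exec stream: a host-network test pod asks the netexec container's /dial endpoint to reach each endpoint pod over HTTP. Replayed by hand it would look roughly like the following; the pod name, container name, IPs, and namespace are taken from the log and only exist while the test namespace is alive.

  kubectl exec host-test-container-pod -n e2e-tests-pod-network-test-xlgd9 -c hostexec -- \
    /bin/sh -c "curl -g -q -s 'http://10.244.1.79:8080/dial?request=hostName&protocol=http&host=10.244.2.123&port=8080&tries=1'"
  # A successful check returns a JSON body whose "responses" list contains the
  # target pod's hostname; the test repeats this for every endpoint pod.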
Jun 4 11:46:28.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:46:28.774: INFO: namespace: e2e-tests-pod-network-test-xlgd9, resource: bindings, ignored listing per whitelist Jun 4 11:46:28.806: INFO: namespace e2e-tests-pod-network-test-xlgd9 deletion completed in 22.089613929s • [SLOW TEST:46.663 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:46:28.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 4 11:46:36.959: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 4 11:46:36.980: INFO: Pod pod-with-prestop-exec-hook still exists Jun 4 11:46:38.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 4 11:46:38.984: INFO: Pod pod-with-prestop-exec-hook still exists Jun 4 11:46:40.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 4 11:46:40.987: INFO: Pod pod-with-prestop-exec-hook still exists Jun 4 11:46:42.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 4 11:46:42.984: INFO: Pod pod-with-prestop-exec-hook still exists Jun 4 11:46:44.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 4 11:46:44.983: INFO: Pod pod-with-prestop-exec-hook still exists Jun 4 11:46:46.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 4 11:46:46.984: INFO: Pod pod-with-prestop-exec-hook still exists Jun 4 11:46:48.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 4 11:46:48.984: INFO: Pod pod-with-prestop-exec-hook still exists Jun 4 11:46:50.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 4 11:46:50.985: INFO: Pod pod-with-prestop-exec-hook still exists Jun 4 11:46:52.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 4 11:46:52.984: INFO: Pod pod-with-prestop-exec-hook still exists Jun 4 11:46:54.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 4 11:46:54.985: INFO: Pod pod-with-prestop-exec-hook still exists Jun 4 11:46:56.980: INFO: Waiting for pod pod-with-prestop-exec-hook to 
disappear Jun 4 11:46:56.984: INFO: Pod pod-with-prestop-exec-hook still exists Jun 4 11:46:58.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 4 11:46:58.983: INFO: Pod pod-with-prestop-exec-hook still exists Jun 4 11:47:00.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 4 11:47:00.984: INFO: Pod pod-with-prestop-exec-hook still exists Jun 4 11:47:02.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 4 11:47:02.984: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:47:02.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-klzxb" for this suite. Jun 4 11:47:25.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:47:25.067: INFO: namespace: e2e-tests-container-lifecycle-hook-klzxb, resource: bindings, ignored listing per whitelist Jun 4 11:47:25.087: INFO: namespace e2e-tests-container-lifecycle-hook-klzxb deletion completed in 22.09284172s • [SLOW TEST:56.280 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:47:25.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:47:29.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-88j24" for this suite. 
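[Annotation] The two tests above cover a preStop exec hook and kubelet-written hostAliases. Two minimal pod sketches follow; the real lifecycle-hook test has its hook call back to a handler pod, so the echo command here is a simplification, and the IP/hostnames are assumptions.

  kubectl apply -f - <<'EOF'
  # Pod for the preStop exec hook: the hook runs when the pod is deleted.
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-exec-hook
  spec:
    containers:
    - name: pod-with-prestop-exec-hook
      image: docker.io/library/busybox:1.29
      command: ["sleep", "3600"]
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "echo prestop-hook-ran"]
  ---
  # Pod for the hostAliases test: the kubelet writes these entries into /etc/hosts.
  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-host-aliases
  spec:
    restartPolicy: Never
    hostAliases:
    - ip: "127.0.0.1"
      hostnames: ["foo.local", "bar.local"]
    containers:
    - name: busybox-host-aliases
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "cat /etc/hosts"]
  EOF
  kubectl delete pod pod-with-prestop-exec-hook   # triggers the preStop hook
  kubectl logs busybox-host-aliases               # /etc/hosts should list foo.local and bar.local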
Jun 4 11:48:15.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:48:15.309: INFO: namespace: e2e-tests-kubelet-test-88j24, resource: bindings, ignored listing per whitelist Jun 4 11:48:15.361: INFO: namespace e2e-tests-kubelet-test-88j24 deletion completed in 46.096327571s • [SLOW TEST:50.274 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:48:15.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:48:19.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-vgzv9" for this suite. 
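[Annotation] The EmptyDir-wrapper test above creates a secret, a configMap, and a pod (per its STEP lines) to verify that volume types the kubelet implements on top of internal emptyDir wrappers can coexist without conflicting. A minimal sketch with assumed names:

  kubectl create secret generic wrapper-secret --from-literal=data-1=value-1
  kubectl create configmap wrapper-configmap --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-wrapper-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      # Both wrapper-backed volumes are mounted side by side in one pod.
      command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: wrapper-secret
    - name: configmap-volume
      configMap:
        name: wrapper-configmap
  EOF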
Jun 4 11:48:25.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:48:25.584: INFO: namespace: e2e-tests-emptydir-wrapper-vgzv9, resource: bindings, ignored listing per whitelist Jun 4 11:48:25.648: INFO: namespace e2e-tests-emptydir-wrapper-vgzv9 deletion completed in 6.100347706s • [SLOW TEST:10.286 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:48:25.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 4 11:48:30.345: INFO: Successfully updated pod "labelsupdate4cbb9d05-a659-11ea-86dc-0242ac110018" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:48:32.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wqlbs" for this suite. 
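[Annotation] The Downward API volume test above updates a pod label and expects the change to appear in the mounted labels file without a restart. A minimal sketch (names, image, and label keys are assumptions; the kubelet refreshes downwardAPI volumes on its periodic sync):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: labelsupdate-demo
    labels:
      key1: value1
  spec:
    containers:
    - name: client-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels
  EOF
  # Update a label, then read the file back from inside the running container.
  kubectl label pod labelsupdate-demo key2=value2
  kubectl exec labelsupdate-demo -- cat /etc/podinfo/labels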
Jun 4 11:48:54.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:48:54.478: INFO: namespace: e2e-tests-downward-api-wqlbs, resource: bindings, ignored listing per whitelist Jun 4 11:48:54.490: INFO: namespace e2e-tests-downward-api-wqlbs deletion completed in 22.109078317s • [SLOW TEST:28.842 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:48:54.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-5deabec7-a659-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume secrets Jun 4 11:48:54.640: INFO: Waiting up to 5m0s for pod "pod-secrets-5decb92c-a659-11ea-86dc-0242ac110018" in namespace "e2e-tests-secrets-7nwzp" to be "success or failure" Jun 4 11:48:54.653: INFO: Pod "pod-secrets-5decb92c-a659-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.684956ms Jun 4 11:48:56.658: INFO: Pod "pod-secrets-5decb92c-a659-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018487764s Jun 4 11:48:58.662: INFO: Pod "pod-secrets-5decb92c-a659-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022626246s STEP: Saw pod success Jun 4 11:48:58.662: INFO: Pod "pod-secrets-5decb92c-a659-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:48:58.665: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-5decb92c-a659-11ea-86dc-0242ac110018 container secret-volume-test: STEP: delete the pod Jun 4 11:48:58.703: INFO: Waiting for pod pod-secrets-5decb92c-a659-11ea-86dc-0242ac110018 to disappear Jun 4 11:48:58.715: INFO: Pod pod-secrets-5decb92c-a659-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:48:58.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-7nwzp" for this suite. 
Jun 4 11:49:04.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:49:04.782: INFO: namespace: e2e-tests-secrets-7nwzp, resource: bindings, ignored listing per whitelist Jun 4 11:49:04.816: INFO: namespace e2e-tests-secrets-7nwzp deletion completed in 6.097553612s • [SLOW TEST:10.326 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:49:04.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 4 11:49:04.943: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6411574c-a659-11ea-86dc-0242ac110018" in namespace "e2e-tests-downward-api-qhxhl" to be "success or failure" Jun 4 11:49:05.046: INFO: Pod "downwardapi-volume-6411574c-a659-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 102.304173ms Jun 4 11:49:07.064: INFO: Pod "downwardapi-volume-6411574c-a659-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120268723s Jun 4 11:49:09.068: INFO: Pod "downwardapi-volume-6411574c-a659-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.124459179s STEP: Saw pod success Jun 4 11:49:09.068: INFO: Pod "downwardapi-volume-6411574c-a659-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:49:09.071: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-6411574c-a659-11ea-86dc-0242ac110018 container client-container: STEP: delete the pod Jun 4 11:49:09.138: INFO: Waiting for pod downwardapi-volume-6411574c-a659-11ea-86dc-0242ac110018 to disappear Jun 4 11:49:09.171: INFO: Pod downwardapi-volume-6411574c-a659-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:49:09.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qhxhl" for this suite. 
Jun 4 11:49:15.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:49:15.250: INFO: namespace: e2e-tests-downward-api-qhxhl, resource: bindings, ignored listing per whitelist Jun 4 11:49:15.268: INFO: namespace e2e-tests-downward-api-qhxhl deletion completed in 6.093953277s • [SLOW TEST:10.452 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:49:15.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 4 11:49:15.375: INFO: Waiting up to 5m0s for pod "downward-api-6a47af5d-a659-11ea-86dc-0242ac110018" in namespace "e2e-tests-downward-api-nf6f7" to be "success or failure" Jun 4 11:49:15.379: INFO: Pod "downward-api-6a47af5d-a659-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037781ms Jun 4 11:49:17.383: INFO: Pod "downward-api-6a47af5d-a659-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008074574s Jun 4 11:49:19.388: INFO: Pod "downward-api-6a47af5d-a659-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012360912s STEP: Saw pod success Jun 4 11:49:19.388: INFO: Pod "downward-api-6a47af5d-a659-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:49:19.391: INFO: Trying to get logs from node hunter-worker2 pod downward-api-6a47af5d-a659-11ea-86dc-0242ac110018 container dapi-container: STEP: delete the pod Jun 4 11:49:19.426: INFO: Waiting for pod downward-api-6a47af5d-a659-11ea-86dc-0242ac110018 to disappear Jun 4 11:49:19.434: INFO: Pod downward-api-6a47af5d-a659-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:49:19.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-nf6f7" for this suite. 
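[Annotation] The Downward API test above exposes limits.cpu/memory as environment variables on a container with no declared limits, so both resolve to node allocatable. A minimal sketch with assumed names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-limits-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
      # No resources are declared, so both values fall back to the node's
      # allocatable CPU and memory, which is what the test asserts.
      env:
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.cpu
      - name: MEMORY_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.memory
  EOF
  kubectl logs downward-api-limits-demo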
Jun 4 11:49:25.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:49:25.503: INFO: namespace: e2e-tests-downward-api-nf6f7, resource: bindings, ignored listing per whitelist Jun 4 11:49:25.530: INFO: namespace e2e-tests-downward-api-nf6f7 deletion completed in 6.09199364s • [SLOW TEST:10.261 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:49:25.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 4 11:49:25.615: INFO: Waiting up to 5m0s for pod "pod-7063a54d-a659-11ea-86dc-0242ac110018" in namespace "e2e-tests-emptydir-bkh6c" to be "success or failure" Jun 4 11:49:25.631: INFO: Pod "pod-7063a54d-a659-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.1774ms Jun 4 11:49:27.635: INFO: Pod "pod-7063a54d-a659-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019869762s Jun 4 11:49:29.640: INFO: Pod "pod-7063a54d-a659-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024299301s STEP: Saw pod success Jun 4 11:49:29.640: INFO: Pod "pod-7063a54d-a659-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:49:29.643: INFO: Trying to get logs from node hunter-worker pod pod-7063a54d-a659-11ea-86dc-0242ac110018 container test-container: STEP: delete the pod Jun 4 11:49:29.680: INFO: Waiting for pod pod-7063a54d-a659-11ea-86dc-0242ac110018 to disappear Jun 4 11:49:29.710: INFO: Pod pod-7063a54d-a659-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:49:29.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-bkh6c" for this suite. 
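For reference, the kind of pod this emptyDir test creates combines a memory-backed (tmpfs) emptyDir volume with a non-root security context; the UID, image and commands below are illustrative assumptions rather than the framework's exact fixture:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUser := int64(1000) // illustrative non-root UID

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUser},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /test-volume/file && chmod 0777 /test-volume/file && ls -l /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```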
Jun 4 11:49:35.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:49:35.782: INFO: namespace: e2e-tests-emptydir-bkh6c, resource: bindings, ignored listing per whitelist Jun 4 11:49:35.809: INFO: namespace e2e-tests-emptydir-bkh6c deletion completed in 6.094971584s • [SLOW TEST:10.279 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:49:35.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 4 11:49:35.934: INFO: Waiting up to 5m0s for pod "downward-api-7686c465-a659-11ea-86dc-0242ac110018" in namespace "e2e-tests-downward-api-v2rtp" to be "success or failure" Jun 4 11:49:35.937: INFO: Pod "downward-api-7686c465-a659-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.882648ms Jun 4 11:49:37.941: INFO: Pod "downward-api-7686c465-a659-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006371482s Jun 4 11:49:39.944: INFO: Pod "downward-api-7686c465-a659-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009923659s STEP: Saw pod success Jun 4 11:49:39.944: INFO: Pod "downward-api-7686c465-a659-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:49:39.947: INFO: Trying to get logs from node hunter-worker2 pod downward-api-7686c465-a659-11ea-86dc-0242ac110018 container dapi-container: STEP: delete the pod Jun 4 11:49:39.968: INFO: Waiting for pod downward-api-7686c465-a659-11ea-86dc-0242ac110018 to disappear Jun 4 11:49:39.991: INFO: Pod downward-api-7686c465-a659-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:49:39.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-v2rtp" for this suite. 
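The pod UID is only assigned server-side, so the test above injects it through a downward API fieldRef rather than hard-coding it. A minimal sketch with assumed names and image:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-uid-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo POD_UID=$POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					ValueFrom: &corev1.EnvVarSource{
						// metadata.uid is only known once the API server has
						// created the pod, so it must come from the downward API.
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```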
Jun 4 11:49:46.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:49:46.245: INFO: namespace: e2e-tests-downward-api-v2rtp, resource: bindings, ignored listing per whitelist Jun 4 11:49:46.264: INFO: namespace e2e-tests-downward-api-v2rtp deletion completed in 6.127920068s • [SLOW TEST:10.455 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:49:46.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-7cc66b9b-a659-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 4 11:49:46.406: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7cc7cf37-a659-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-zl74c" to be "success or failure" Jun 4 11:49:46.459: INFO: Pod "pod-projected-configmaps-7cc7cf37-a659-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 52.839712ms Jun 4 11:49:48.463: INFO: Pod "pod-projected-configmaps-7cc7cf37-a659-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056776438s Jun 4 11:49:50.467: INFO: Pod "pod-projected-configmaps-7cc7cf37-a659-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06109569s STEP: Saw pod success Jun 4 11:49:50.467: INFO: Pod "pod-projected-configmaps-7cc7cf37-a659-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:49:50.470: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-7cc7cf37-a659-11ea-86dc-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jun 4 11:49:50.523: INFO: Waiting for pod pod-projected-configmaps-7cc7cf37-a659-11ea-86dc-0242ac110018 to disappear Jun 4 11:49:50.561: INFO: Pod pod-projected-configmaps-7cc7cf37-a659-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:49:50.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zl74c" for this suite. 
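The projected volume in the test above remaps a ConfigMap key to a different path inside the volume and runs the consuming container as a non-root user. A hedged sketch with made-up ConfigMap, key and path names:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUser := int64(1000) // illustrative non-root UID

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUser},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/projected/path/to/data"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-volume", MountPath: "/etc/projected"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
								// Map key "data-1" to a different path inside the volume.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data"}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```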
Jun 4 11:49:56.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:49:56.591: INFO: namespace: e2e-tests-projected-zl74c, resource: bindings, ignored listing per whitelist Jun 4 11:49:56.669: INFO: namespace e2e-tests-projected-zl74c deletion completed in 6.105130522s • [SLOW TEST:10.405 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:49:56.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-5rq2t Jun 4 11:50:00.798: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-5rq2t STEP: checking the pod's current state and verifying that restartCount is present Jun 4 11:50:00.802: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:54:01.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-5rq2t" for this suite. 
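The probe in the spec above only passes while /tmp/health exists; because the container never removes the file, the restart count observed over the four-minute window stays at 0. A rough sketch of such a pod follows (image, timings and command are assumptions; the promoted Exec field is assigned separately so the snippet stays agnostic to the embedded handler struct's name across k8s.io/api versions):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Probe that succeeds as long as /tmp/health exists.
	probe := &corev1.Probe{InitialDelaySeconds: 15, PeriodSeconds: 5, FailureThreshold: 1}
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox",
				// Create the health file and keep it around, so the probe
				// keeps passing and the restart count stays at 0.
				Command:       []string{"sh", "-c", "touch /tmp/health && sleep 600"},
				LivenessProbe: probe,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```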
Jun 4 11:54:07.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:54:07.669: INFO: namespace: e2e-tests-container-probe-5rq2t, resource: bindings, ignored listing per whitelist Jun 4 11:54:07.736: INFO: namespace e2e-tests-container-probe-5rq2t deletion completed in 6.110846231s • [SLOW TEST:251.067 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:54:07.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 4 11:54:07.824: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:54:11.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-s9fwz" for this suite. 
Jun 4 11:54:49.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:54:49.951: INFO: namespace: e2e-tests-pods-s9fwz, resource: bindings, ignored listing per whitelist Jun 4 11:54:49.993: INFO: namespace e2e-tests-pods-s9fwz deletion completed in 38.11195251s • [SLOW TEST:42.256 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:54:49.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jun 4 11:54:50.094: INFO: Pod name pod-release: Found 0 pods out of 1 Jun 4 11:54:55.099: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:54:56.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-gtrx4" for this suite. 
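A ReplicationController only owns pods whose labels match its selector map, so relabeling one of its pods releases it and forces a replacement, which is what the spec above checks. An illustrative controller of that shape (the pod-release name comes from the log; the image and everything else are assumptions):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "pod-release"}

	rc := corev1.ReplicationController{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ReplicationController"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-release"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			// The controller only "owns" pods whose labels match this selector;
			// relabeling a pod out of the selector releases it and triggers a
			// replacement.
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "nginx:1.14-alpine"}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}
```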
Jun 4 11:55:02.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:55:02.148: INFO: namespace: e2e-tests-replication-controller-gtrx4, resource: bindings, ignored listing per whitelist Jun 4 11:55:02.207: INFO: namespace e2e-tests-replication-controller-gtrx4 deletion completed in 6.087673065s • [SLOW TEST:12.214 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:55:02.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Jun 4 11:55:02.819: INFO: Waiting up to 5m0s for pod "pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-zts75" in namespace "e2e-tests-svcaccounts-t4k86" to be "success or failure" Jun 4 11:55:02.849: INFO: Pod "pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-zts75": Phase="Pending", Reason="", readiness=false. Elapsed: 29.778537ms Jun 4 11:55:04.853: INFO: Pod "pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-zts75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033815309s Jun 4 11:55:06.856: INFO: Pod "pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-zts75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037083583s Jun 4 11:55:08.860: INFO: Pod "pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-zts75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040984732s STEP: Saw pod success Jun 4 11:55:08.860: INFO: Pod "pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-zts75" satisfied condition "success or failure" Jun 4 11:55:08.863: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-zts75 container token-test: STEP: delete the pod Jun 4 11:55:08.899: INFO: Waiting for pod pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-zts75 to disappear Jun 4 11:55:08.903: INFO: Pod pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-zts75 no longer exists STEP: Creating a pod to test consume service account root CA Jun 4 11:55:08.906: INFO: Waiting up to 5m0s for pod "pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-tzvw2" in namespace "e2e-tests-svcaccounts-t4k86" to be "success or failure" Jun 4 11:55:08.908: INFO: Pod "pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-tzvw2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.298798ms Jun 4 11:55:10.943: INFO: Pod "pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-tzvw2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.037083126s Jun 4 11:55:13.087: INFO: Pod "pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-tzvw2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180852875s Jun 4 11:55:15.092: INFO: Pod "pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-tzvw2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.185496142s STEP: Saw pod success Jun 4 11:55:15.092: INFO: Pod "pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-tzvw2" satisfied condition "success or failure" Jun 4 11:55:15.095: INFO: Trying to get logs from node hunter-worker pod pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-tzvw2 container root-ca-test: STEP: delete the pod Jun 4 11:55:15.132: INFO: Waiting for pod pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-tzvw2 to disappear Jun 4 11:55:15.137: INFO: Pod pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-tzvw2 no longer exists STEP: Creating a pod to test consume service account namespace Jun 4 11:55:15.141: INFO: Waiting up to 5m0s for pod "pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-w8wcs" in namespace "e2e-tests-svcaccounts-t4k86" to be "success or failure" Jun 4 11:55:15.183: INFO: Pod "pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-w8wcs": Phase="Pending", Reason="", readiness=false. Elapsed: 42.029691ms Jun 4 11:55:17.186: INFO: Pod "pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-w8wcs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045466056s Jun 4 11:55:19.191: INFO: Pod "pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-w8wcs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049923407s Jun 4 11:55:21.195: INFO: Pod "pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-w8wcs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054130325s STEP: Saw pod success Jun 4 11:55:21.195: INFO: Pod "pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-w8wcs" satisfied condition "success or failure" Jun 4 11:55:21.198: INFO: Trying to get logs from node hunter-worker pod pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-w8wcs container namespace-test: STEP: delete the pod Jun 4 11:55:21.235: INFO: Waiting for pod pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-w8wcs to disappear Jun 4 11:55:21.250: INFO: Pod pod-service-account-3960d03e-a65a-11ea-86dc-0242ac110018-w8wcs no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:55:21.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-t4k86" for this suite. 
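The token, CA certificate and namespace the test above consumes are projected by the kubelet under /var/run/secrets/kubernetes.io/serviceaccount. A minimal sketch of a pod that reads them (image and commands are assumptions):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	automount := true

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "svcaccount-token-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:                corev1.RestartPolicyNever,
			ServiceAccountName:           "default",
			AutomountServiceAccountToken: &automount,
			Containers: []corev1.Container{{
				Name:  "token-test",
				Image: "busybox",
				// The kubelet mounts the service account credentials into this
				// well-known directory; the container lists it and reads the
				// namespace file.
				Command: []string{"sh", "-c",
					"ls /var/run/secrets/kubernetes.io/serviceaccount && cat /var/run/secrets/kubernetes.io/serviceaccount/namespace"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```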
Jun 4 11:55:27.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:55:27.283: INFO: namespace: e2e-tests-svcaccounts-t4k86, resource: bindings, ignored listing per whitelist Jun 4 11:55:27.346: INFO: namespace e2e-tests-svcaccounts-t4k86 deletion completed in 6.09049239s • [SLOW TEST:25.139 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:55:27.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:55:27.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-52scf" for this suite. 
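The QoS class is computed by the API server from the containers' resource requests and limits; equal requests and limits for every resource of every container yield Guaranteed. A sketch with assumed values and image of a pod that would receive that class:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Equal requests and limits for CPU and memory give the pod the
	// "Guaranteed" QoS class in its status.
	guaranteed := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "qos-class-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "qos-container",
				Image: "nginx:1.14-alpine",
				Resources: corev1.ResourceRequirements{
					Requests: guaranteed,
					Limits:   guaranteed,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```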
Jun 4 11:55:49.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:55:49.512: INFO: namespace: e2e-tests-pods-52scf, resource: bindings, ignored listing per whitelist Jun 4 11:55:49.576: INFO: namespace e2e-tests-pods-52scf deletion completed in 22.104983062s • [SLOW TEST:22.230 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:55:49.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Jun 4 11:55:49.698: INFO: Waiting up to 5m0s for pod "client-containers-554f017a-a65a-11ea-86dc-0242ac110018" in namespace "e2e-tests-containers-f8mg9" to be "success or failure" Jun 4 11:55:49.752: INFO: Pod "client-containers-554f017a-a65a-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 53.751834ms Jun 4 11:55:51.756: INFO: Pod "client-containers-554f017a-a65a-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05782249s Jun 4 11:55:53.842: INFO: Pod "client-containers-554f017a-a65a-11ea-86dc-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.143922023s Jun 4 11:55:55.846: INFO: Pod "client-containers-554f017a-a65a-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.14781018s STEP: Saw pod success Jun 4 11:55:55.846: INFO: Pod "client-containers-554f017a-a65a-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:55:55.848: INFO: Trying to get logs from node hunter-worker2 pod client-containers-554f017a-a65a-11ea-86dc-0242ac110018 container test-container: STEP: delete the pod Jun 4 11:55:55.891: INFO: Waiting for pod client-containers-554f017a-a65a-11ea-86dc-0242ac110018 to disappear Jun 4 11:55:55.896: INFO: Pod client-containers-554f017a-a65a-11ea-86dc-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:55:55.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-f8mg9" for this suite. 
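In Kubernetes terms the image's "docker cmd" is overridden through the container's args field while the entrypoint is left alone, which is what the spec above exercises. A small sketch with an assumed image and arguments:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "override-args-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Only args is set, so the image's entrypoint is kept while its
				// default command (the "docker cmd") is replaced by these arguments.
				Args: []string{"echo", "override", "arguments"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```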
Jun 4 11:56:01.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:56:01.984: INFO: namespace: e2e-tests-containers-f8mg9, resource: bindings, ignored listing per whitelist Jun 4 11:56:02.014: INFO: namespace e2e-tests-containers-f8mg9 deletion completed in 6.11453209s • [SLOW TEST:12.438 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:56:02.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-cmp5z/secret-test-5cb7e5de-a65a-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume secrets Jun 4 11:56:02.130: INFO: Waiting up to 5m0s for pod "pod-configmaps-5cb86063-a65a-11ea-86dc-0242ac110018" in namespace "e2e-tests-secrets-cmp5z" to be "success or failure" Jun 4 11:56:02.143: INFO: Pod "pod-configmaps-5cb86063-a65a-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.097062ms Jun 4 11:56:04.148: INFO: Pod "pod-configmaps-5cb86063-a65a-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017848038s Jun 4 11:56:06.153: INFO: Pod "pod-configmaps-5cb86063-a65a-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022862954s STEP: Saw pod success Jun 4 11:56:06.153: INFO: Pod "pod-configmaps-5cb86063-a65a-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:56:06.156: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-5cb86063-a65a-11ea-86dc-0242ac110018 container env-test: STEP: delete the pod Jun 4 11:56:06.191: INFO: Waiting for pod pod-configmaps-5cb86063-a65a-11ea-86dc-0242ac110018 to disappear Jun 4 11:56:06.199: INFO: Pod pod-configmaps-5cb86063-a65a-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:56:06.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-cmp5z" for this suite. 
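The env var in the test above is populated from a single key of a pre-created Secret via secretKeyRef. A minimal sketch with hypothetical Secret and key names:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo SECRET_DATA=$SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						// Pull a single key out of an existing Secret.
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```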
Jun 4 11:56:12.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:56:12.235: INFO: namespace: e2e-tests-secrets-cmp5z, resource: bindings, ignored listing per whitelist Jun 4 11:56:12.294: INFO: namespace e2e-tests-secrets-cmp5z deletion completed in 6.091424422s • [SLOW TEST:10.280 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:56:12.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Jun 4 11:56:16.497: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:56:38.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-tv7fq" for this suite. Jun 4 11:56:44.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:56:44.633: INFO: namespace: e2e-tests-namespaces-tv7fq, resource: bindings, ignored listing per whitelist Jun 4 11:56:44.684: INFO: namespace e2e-tests-namespaces-tv7fq deletion completed in 6.087742387s STEP: Destroying namespace "e2e-tests-nsdeletetest-mrwfk" for this suite. Jun 4 11:56:44.686: INFO: Namespace e2e-tests-nsdeletetest-mrwfk was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-wrhss" for this suite. 
Jun 4 11:56:50.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:56:50.765: INFO: namespace: e2e-tests-nsdeletetest-wrhss, resource: bindings, ignored listing per whitelist Jun 4 11:56:50.790: INFO: namespace e2e-tests-nsdeletetest-wrhss deletion completed in 6.103709935s • [SLOW TEST:38.496 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:56:50.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-cxsn STEP: Creating a pod to test atomic-volume-subpath Jun 4 11:56:50.962: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-cxsn" in namespace "e2e-tests-subpath-p28fk" to be "success or failure" Jun 4 11:56:51.003: INFO: Pod "pod-subpath-test-configmap-cxsn": Phase="Pending", Reason="", readiness=false. Elapsed: 41.171836ms Jun 4 11:56:53.008: INFO: Pod "pod-subpath-test-configmap-cxsn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045632931s Jun 4 11:56:55.013: INFO: Pod "pod-subpath-test-configmap-cxsn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050525436s Jun 4 11:56:57.017: INFO: Pod "pod-subpath-test-configmap-cxsn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055079686s Jun 4 11:56:59.021: INFO: Pod "pod-subpath-test-configmap-cxsn": Phase="Running", Reason="", readiness=false. Elapsed: 8.0594008s Jun 4 11:57:01.025: INFO: Pod "pod-subpath-test-configmap-cxsn": Phase="Running", Reason="", readiness=false. Elapsed: 10.063514125s Jun 4 11:57:03.030: INFO: Pod "pod-subpath-test-configmap-cxsn": Phase="Running", Reason="", readiness=false. Elapsed: 12.067771773s Jun 4 11:57:05.035: INFO: Pod "pod-subpath-test-configmap-cxsn": Phase="Running", Reason="", readiness=false. Elapsed: 14.072771483s Jun 4 11:57:07.039: INFO: Pod "pod-subpath-test-configmap-cxsn": Phase="Running", Reason="", readiness=false. Elapsed: 16.077155657s Jun 4 11:57:09.043: INFO: Pod "pod-subpath-test-configmap-cxsn": Phase="Running", Reason="", readiness=false. Elapsed: 18.081348745s Jun 4 11:57:11.048: INFO: Pod "pod-subpath-test-configmap-cxsn": Phase="Running", Reason="", readiness=false. Elapsed: 20.086247898s Jun 4 11:57:13.053: INFO: Pod "pod-subpath-test-configmap-cxsn": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.090896892s Jun 4 11:57:15.057: INFO: Pod "pod-subpath-test-configmap-cxsn": Phase="Running", Reason="", readiness=false. Elapsed: 24.095242727s Jun 4 11:57:17.061: INFO: Pod "pod-subpath-test-configmap-cxsn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.099008379s STEP: Saw pod success Jun 4 11:57:17.061: INFO: Pod "pod-subpath-test-configmap-cxsn" satisfied condition "success or failure" Jun 4 11:57:17.063: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-cxsn container test-container-subpath-configmap-cxsn: STEP: delete the pod Jun 4 11:57:17.103: INFO: Waiting for pod pod-subpath-test-configmap-cxsn to disappear Jun 4 11:57:17.110: INFO: Pod pod-subpath-test-configmap-cxsn no longer exists STEP: Deleting pod pod-subpath-test-configmap-cxsn Jun 4 11:57:17.110: INFO: Deleting pod "pod-subpath-test-configmap-cxsn" in namespace "e2e-tests-subpath-p28fk" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:57:17.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-p28fk" for this suite. Jun 4 11:57:23.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:57:23.162: INFO: namespace: e2e-tests-subpath-p28fk, resource: bindings, ignored listing per whitelist Jun 4 11:57:23.192: INFO: namespace e2e-tests-subpath-p28fk deletion completed in 6.076526764s • [SLOW TEST:32.401 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:57:23.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 4 11:57:23.304: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jun 4 11:57:23.332: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 4 11:57:28.340: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 4 11:57:28.340: INFO: Creating deployment "test-rolling-update-deployment" Jun 4 11:57:28.344: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jun 4 11:57:28.357: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jun 4 11:57:30.362: 
INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jun 4 11:57:30.363: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726868648, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726868648, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726868648, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726868648, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 4 11:57:32.396: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 4 11:57:32.407: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-7x6wh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7x6wh/deployments/test-rolling-update-deployment,UID:901eb71e-a65a-11ea-99e8-0242ac110002,ResourceVersion:14176532,Generation:1,CreationTimestamp:2020-06-04 11:57:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-04 11:57:28 +0000 UTC 2020-06-04 11:57:28 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-04 11:57:31 +0000 UTC 2020-06-04 11:57:28 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 4 11:57:32.410: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-7x6wh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7x6wh/replicasets/test-rolling-update-deployment-75db98fb4c,UID:9021aefd-a65a-11ea-99e8-0242ac110002,ResourceVersion:14176522,Generation:1,CreationTimestamp:2020-06-04 11:57:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 901eb71e-a65a-11ea-99e8-0242ac110002 0xc0008e3bc7 0xc0008e3bc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 4 11:57:32.410: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 4 11:57:32.410: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-7x6wh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7x6wh/replicasets/test-rolling-update-controller,UID:8d1e2be9-a65a-11ea-99e8-0242ac110002,ResourceVersion:14176530,Generation:2,CreationTimestamp:2020-06-04 11:57:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 901eb71e-a65a-11ea-99e8-0242ac110002 0xc0008e3ae7 0xc0008e3ae8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 4 11:57:32.413: INFO: Pod "test-rolling-update-deployment-75db98fb4c-g4cxp" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-g4cxp,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-7x6wh,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7x6wh/pods/test-rolling-update-deployment-75db98fb4c-g4cxp,UID:90228732-a65a-11ea-99e8-0242ac110002,ResourceVersion:14176521,Generation:0,CreationTimestamp:2020-06-04 11:57:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 9021aefd-a65a-11ea-99e8-0242ac110002 0xc001d9ab37 0xc001d9ab38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5rv9r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5rv9r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-5rv9r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d9abb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d9abd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:57:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:57:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:57:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 11:57:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.136,StartTime:2020-06-04 11:57:28 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-04 11:57:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://38592ef5e769cac82e49bd91fd47d94c67f6892d9e6ce0e30cfaa9966beca0d4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:57:32.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-7x6wh" for this suite. 
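Stripped of status and defaulted fields, the Deployment dumped above reduces to a small RollingUpdate spec. The sketch below keeps only the essentials: the labels, image, single replica and 25% surge/unavailability budget are taken from the dump, everything else is assumed:

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(1)
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")
	labels := map[string]string{"name": "sample-pod"}

	deploy := appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
		ObjectMeta: metav1.ObjectMeta{Name: "rolling-update-example"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate scales the new ReplicaSet up and the old one down
			// within the surge/unavailability budget, instead of recreating
			// all pods at once.
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(deploy, "", "  ")
	fmt.Println(string(out))
}
```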
Jun 4 11:57:38.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:57:38.570: INFO: namespace: e2e-tests-deployment-7x6wh, resource: bindings, ignored listing per whitelist Jun 4 11:57:38.633: INFO: namespace e2e-tests-deployment-7x6wh deletion completed in 6.215615782s • [SLOW TEST:15.441 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:57:38.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 4 11:57:38.772: INFO: Waiting up to 5m0s for pod "pod-96558421-a65a-11ea-86dc-0242ac110018" in namespace "e2e-tests-emptydir-h4g8n" to be "success or failure" Jun 4 11:57:38.787: INFO: Pod "pod-96558421-a65a-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.641063ms Jun 4 11:57:40.791: INFO: Pod "pod-96558421-a65a-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018935859s Jun 4 11:57:42.796: INFO: Pod "pod-96558421-a65a-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02372736s STEP: Saw pod success Jun 4 11:57:42.796: INFO: Pod "pod-96558421-a65a-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 11:57:42.799: INFO: Trying to get logs from node hunter-worker pod pod-96558421-a65a-11ea-86dc-0242ac110018 container test-container: STEP: delete the pod Jun 4 11:57:42.818: INFO: Waiting for pod pod-96558421-a65a-11ea-86dc-0242ac110018 to disappear Jun 4 11:57:42.823: INFO: Pod pod-96558421-a65a-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:57:42.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-h4g8n" for this suite. 
Jun 4 11:57:48.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:57:48.876: INFO: namespace: e2e-tests-emptydir-h4g8n, resource: bindings, ignored listing per whitelist Jun 4 11:57:48.910: INFO: namespace e2e-tests-emptydir-h4g8n deletion completed in 6.084072712s • [SLOW TEST:10.277 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:57:48.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 4 11:57:53.534: INFO: Successfully updated pod "pod-update-9c6ec1cf-a65a-11ea-86dc-0242ac110018" STEP: verifying the updated pod is in kubernetes Jun 4 11:57:53.542: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:57:53.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-jwxzn" for this suite. 
Jun 4 11:58:15.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:58:15.628: INFO: namespace: e2e-tests-pods-jwxzn, resource: bindings, ignored listing per whitelist Jun 4 11:58:15.665: INFO: namespace e2e-tests-pods-jwxzn deletion completed in 22.119727068s • [SLOW TEST:26.755 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:58:15.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 4 11:58:20.328: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ac60b146-a65a-11ea-86dc-0242ac110018" Jun 4 11:58:20.328: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ac60b146-a65a-11ea-86dc-0242ac110018" in namespace "e2e-tests-pods-zdb7m" to be "terminated due to deadline exceeded" Jun 4 11:58:20.345: INFO: Pod "pod-update-activedeadlineseconds-ac60b146-a65a-11ea-86dc-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 17.034565ms Jun 4 11:58:22.349: INFO: Pod "pod-update-activedeadlineseconds-ac60b146-a65a-11ea-86dc-0242ac110018": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.020752432s Jun 4 11:58:22.349: INFO: Pod "pod-update-activedeadlineseconds-ac60b146-a65a-11ea-86dc-0242ac110018" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:58:22.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-zdb7m" for this suite. 
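The activeDeadlineSeconds case updates the deadline on an already running pod and then waits for the kubelet to enforce it, which is why the status above flips from Running to Failed with reason DeadlineExceeded within about two seconds. A hypothetical sketch of that update-and-poll flow follows, again assuming pre-1.18 client-go signatures; the two-second deadline and all names are placeholders.

```go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pods := client.CoreV1().Pods("default") // placeholder namespace

	pod, err := pods.Get("deadline-demo", metav1.GetOptions{}) // placeholder name
	if err != nil {
		panic(err)
	}

	// activeDeadlineSeconds is one of the few spec fields that may be
	// changed on a live pod; the kubelet then kills the pod once exceeded.
	deadline := int64(2)
	pod.Spec.ActiveDeadlineSeconds = &deadline
	if _, err := pods.Update(pod); err != nil {
		panic(err)
	}

	// Poll until the pod is terminated for exceeding its deadline.
	err = wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		p, err := pods.Get(pod.Name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return p.Status.Phase == v1.PodFailed && p.Status.Reason == "DeadlineExceeded", nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("terminated due to deadline exceeded")
}
```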
Jun 4 11:58:28.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:58:28.425: INFO: namespace: e2e-tests-pods-zdb7m, resource: bindings, ignored listing per whitelist Jun 4 11:58:28.456: INFO: namespace e2e-tests-pods-zdb7m deletion completed in 6.103094697s • [SLOW TEST:12.790 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:58:28.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-q45dh STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-q45dh to expose endpoints map[] Jun 4 11:58:28.600: INFO: Get endpoints failed (9.308191ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jun 4 11:58:29.604: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-q45dh exposes endpoints map[] (1.013487168s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-q45dh STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-q45dh to expose endpoints map[pod1:[100]] Jun 4 11:58:34.061: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-q45dh exposes endpoints map[pod1:[100]] (4.449856897s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-q45dh STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-q45dh to expose endpoints map[pod1:[100] pod2:[101]] Jun 4 11:58:37.132: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-q45dh exposes endpoints map[pod2:[101] pod1:[100]] (3.0667143s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-q45dh STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-q45dh to expose endpoints map[pod2:[101]] Jun 4 11:58:38.164: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-q45dh exposes endpoints map[pod2:[101]] (1.02873914s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-q45dh STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-q45dh to expose endpoints map[] Jun 4 11:58:39.175: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-q45dh exposes endpoints map[] (1.005897672s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:58:39.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-q45dh" for this suite. Jun 4 11:59:01.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:59:01.436: INFO: namespace: e2e-tests-services-q45dh, resource: bindings, ignored listing per whitelist Jun 4 11:59:01.478: INFO: namespace e2e-tests-services-q45dh deletion completed in 22.0952804s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:33.022 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:59:01.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:59:32.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-6vdx2" for this suite. 
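The Container Runtime blackbox case runs containers that exit under the three restart policies (the rpa/rpof/rpn suffixes suggest Always, OnFailure and Never) and asserts on the RestartCount, Phase, Ready condition and State recorded in the pod status. The hypothetical snippet below shows how those status fields are read back from a pod; the namespace and pod name are placeholders, and the signatures assume pre-1.18 client-go.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod, err := client.CoreV1().Pods("default").Get("terminate-demo", metav1.GetOptions{}) // placeholders
	if err != nil {
		panic(err)
	}

	fmt.Println("phase:", pod.Status.Phase)
	for _, cs := range pod.Status.ContainerStatuses {
		fmt.Printf("container %s: restarts=%d ready=%v\n", cs.Name, cs.RestartCount, cs.Ready)
		switch {
		case cs.State.Terminated != nil:
			fmt.Printf("  terminated: exitCode=%d reason=%s\n", cs.State.Terminated.ExitCode, cs.State.Terminated.Reason)
		case cs.State.Waiting != nil:
			fmt.Printf("  waiting: reason=%s\n", cs.State.Waiting.Reason)
		case cs.State.Running != nil:
			fmt.Println("  running since", cs.State.Running.StartedAt)
		}
	}
}
```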
Jun 4 11:59:38.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:59:38.096: INFO: namespace: e2e-tests-container-runtime-6vdx2, resource: bindings, ignored listing per whitelist Jun 4 11:59:38.160: INFO: namespace e2e-tests-container-runtime-6vdx2 deletion completed in 6.102763169s • [SLOW TEST:36.682 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:59:38.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 11:59:38.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-nflcv" for this suite. 
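The "should provide secure master service" case finishes within seconds because it only inspects existing cluster state; it is essentially a check that the built-in kubernetes Service in the default namespace publishes the API server over HTTPS on port 443 (the log shows no other steps). A rough, hypothetical equivalent with client-go, using pre-1.18 signatures and a placeholder kubeconfig path:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The API server publishes itself as the "kubernetes" Service in "default".
	svc, err := client.CoreV1().Services("default").Get("kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range svc.Spec.Ports {
		fmt.Printf("port %q: %d/%s targetPort=%s\n", p.Name, p.Port, p.Protocol, p.TargetPort.String())
	}
}
```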
Jun 4 11:59:44.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 11:59:44.321: INFO: namespace: e2e-tests-services-nflcv, resource: bindings, ignored listing per whitelist Jun 4 11:59:44.375: INFO: namespace e2e-tests-services-nflcv deletion completed in 6.101833052s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.215 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 11:59:44.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jun 4 11:59:44.576: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:59:44.579: INFO: Number of nodes with available pods: 0 Jun 4 11:59:44.579: INFO: Node hunter-worker is running more than one daemon pod Jun 4 11:59:45.584: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:59:45.588: INFO: Number of nodes with available pods: 0 Jun 4 11:59:45.588: INFO: Node hunter-worker is running more than one daemon pod Jun 4 11:59:46.584: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:59:46.589: INFO: Number of nodes with available pods: 0 Jun 4 11:59:46.589: INFO: Node hunter-worker is running more than one daemon pod Jun 4 11:59:47.583: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:59:47.586: INFO: Number of nodes with available pods: 0 Jun 4 11:59:47.586: INFO: Node hunter-worker is running more than one daemon pod Jun 4 11:59:48.584: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:59:48.587: INFO: Number of nodes with available pods: 1 Jun 4 11:59:48.587: INFO: Node hunter-worker2 is running more than one daemon pod Jun 4 11:59:49.584: INFO: DaemonSet pods can't tolerate node hunter-control-plane with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:59:49.594: INFO: Number of nodes with available pods: 2 Jun 4 11:59:49.594: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jun 4 11:59:49.624: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:59:49.627: INFO: Number of nodes with available pods: 1 Jun 4 11:59:49.627: INFO: Node hunter-worker is running more than one daemon pod Jun 4 11:59:50.632: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:59:50.659: INFO: Number of nodes with available pods: 1 Jun 4 11:59:50.659: INFO: Node hunter-worker is running more than one daemon pod Jun 4 11:59:51.633: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:59:51.636: INFO: Number of nodes with available pods: 1 Jun 4 11:59:51.636: INFO: Node hunter-worker is running more than one daemon pod Jun 4 11:59:52.631: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:59:52.635: INFO: Number of nodes with available pods: 1 Jun 4 11:59:52.635: INFO: Node hunter-worker is running more than one daemon pod Jun 4 11:59:53.632: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:59:53.635: INFO: Number of nodes with available pods: 1 Jun 4 11:59:53.635: INFO: Node hunter-worker is running more than one daemon pod Jun 4 11:59:54.632: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:59:54.635: INFO: Number of nodes with available pods: 1 Jun 4 11:59:54.635: INFO: Node hunter-worker is running more than one daemon pod Jun 4 11:59:55.631: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 4 11:59:55.635: INFO: Number of nodes with available pods: 2 Jun 4 11:59:55.635: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-zw6kw, will wait for the garbage collector to delete the pods Jun 4 11:59:55.698: INFO: Deleting DaemonSet.extensions daemon-set took: 7.215279ms Jun 4 11:59:55.799: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.32622ms Jun 4 12:00:01.302: INFO: Number of nodes with available pods: 0 Jun 4 12:00:01.302: INFO: Number of running nodes: 0, number of available pods: 0 Jun 4 12:00:01.304: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-zw6kw/daemonsets","resourceVersion":"14177115"},"items":null} 
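A DaemonSet is expected to put exactly one pod on every schedulable node, which is why the run above converges on two available pods; the repeated "can't tolerate node hunter-control-plane" lines just mean the control-plane node is skipped because the pod template carries no toleration for its node-role.kubernetes.io/master:NoSchedule taint. Below is a hypothetical apps/v1 DaemonSet of that shape, printed as JSON; the labels, image and command are placeholders, and the commented-out toleration shows what it would take to also cover the tainted node.

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set-demo"} // placeholder labels

	ds := &appsv1.DaemonSet{
		TypeMeta:   metav1.TypeMeta{Kind: "DaemonSet", APIVersion: "apps/v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set-demo"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{
						Name:    "app",
						Image:   "busybox", // stand-in image
						Command: []string{"sh", "-c", "sleep 3600"},
					}},
					// Without a toleration like the one below, tainted control-plane
					// nodes are skipped, exactly as the log above shows.
					// Tolerations: []v1.Toleration{{
					// 	Key:    "node-role.kubernetes.io/master",
					// 	Effect: v1.TaintEffectNoSchedule,
					// }},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
```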
Jun 4 12:00:01.307: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-zw6kw/pods","resourceVersion":"14177115"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:00:01.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-zw6kw" for this suite. Jun 4 12:00:07.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:00:07.400: INFO: namespace: e2e-tests-daemonsets-zw6kw, resource: bindings, ignored listing per whitelist Jun 4 12:00:07.426: INFO: namespace e2e-tests-daemonsets-zw6kw deletion completed in 6.10757697s • [SLOW TEST:23.051 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:00:07.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Jun 4 12:00:07.524: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-9vn84" to be "success or failure" Jun 4 12:00:07.546: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 22.069002ms Jun 4 12:00:09.550: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02609319s Jun 4 12:00:11.555: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.030694556s Jun 4 12:00:13.559: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035229796s STEP: Saw pod success Jun 4 12:00:13.559: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jun 4 12:00:13.563: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Jun 4 12:00:13.583: INFO: Waiting for pod pod-host-path-test to disappear Jun 4 12:00:13.587: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:00:13.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-9vn84" for this suite. 
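The HostPath case mounts a directory from the node's filesystem and checks the mode the container observes on the mount point. A hypothetical pod with a hostPath volume is sketched below and printed as JSON; the host path, image and command are placeholders rather than the suite's own test fixture.

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					// Bind-mounts a directory that already exists on the node.
					HostPath: &v1.HostPathVolumeSource{Path: "/tmp"}, // placeholder path
				},
			}},
			Containers: []v1.Container{{
				Name:         "test-container-1",
				Image:        "busybox", // stand-in image
				Command:      []string{"sh", "-c", "stat -c '%a' /test-volume"}, // print the mode of the mount point
				VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```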
Jun 4 12:00:19.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:00:19.676: INFO: namespace: e2e-tests-hostpath-9vn84, resource: bindings, ignored listing per whitelist Jun 4 12:00:19.742: INFO: namespace e2e-tests-hostpath-9vn84 deletion completed in 6.129333364s • [SLOW TEST:12.316 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:00:19.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-72mxg in namespace e2e-tests-proxy-6nzt7 I0604 12:00:19.924235 6 runners.go:184] Created replication controller with name: proxy-service-72mxg, namespace: e2e-tests-proxy-6nzt7, replica count: 1 I0604 12:00:20.974697 6 runners.go:184] proxy-service-72mxg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0604 12:00:21.974936 6 runners.go:184] proxy-service-72mxg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0604 12:00:22.975175 6 runners.go:184] proxy-service-72mxg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0604 12:00:23.975445 6 runners.go:184] proxy-service-72mxg Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 4 12:00:23.995: INFO: setup took 4.117596751s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jun 4 12:00:24.002: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6nzt7/pods/http:proxy-service-72mxg-djrm7:160/proxy/: foo (200; 6.442567ms) Jun 4 12:00:24.002: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6nzt7/pods/http:proxy-service-72mxg-djrm7:162/proxy/: bar (200; 6.817225ms) Jun 4 12:00:24.002: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6nzt7/services/http:proxy-service-72mxg:portname2/proxy/: bar (200; 7.15654ms) Jun 4 12:00:24.003: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6nzt7/pods/proxy-service-72mxg-djrm7:160/proxy/: foo (200; 7.365536ms) Jun 4 12:00:24.003: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-6nzt7/pods/proxy-service-72mxg-djrm7:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe 
should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 4 12:01:04.071: INFO: Container started at 2020-06-04 12:00:40 +0000 UTC, pod became ready at 2020-06-04 12:01:03 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:01:04.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-nrbtw" for this suite. Jun 4 12:01:26.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:01:26.127: INFO: namespace: e2e-tests-container-probe-nrbtw, resource: bindings, ignored listing per whitelist Jun 4 12:01:26.167: INFO: namespace e2e-tests-container-probe-nrbtw deletion completed in 22.091596952s • [SLOW TEST:48.244 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:01:26.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jun 4 12:01:26.315: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-xdjd4,SelfLink:/api/v1/namespaces/e2e-tests-watch-xdjd4/configmaps/e2e-watch-test-configmap-a,UID:1df3b93c-a65b-11ea-99e8-0242ac110002,ResourceVersion:14177407,Generation:0,CreationTimestamp:2020-06-04 12:01:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 4 12:01:26.315: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-xdjd4,SelfLink:/api/v1/namespaces/e2e-tests-watch-xdjd4/configmaps/e2e-watch-test-configmap-a,UID:1df3b93c-a65b-11ea-99e8-0242ac110002,ResourceVersion:14177407,Generation:0,CreationTimestamp:2020-06-04 12:01:26 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jun 4 12:01:36.323: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-xdjd4,SelfLink:/api/v1/namespaces/e2e-tests-watch-xdjd4/configmaps/e2e-watch-test-configmap-a,UID:1df3b93c-a65b-11ea-99e8-0242ac110002,ResourceVersion:14177428,Generation:0,CreationTimestamp:2020-06-04 12:01:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 4 12:01:36.323: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-xdjd4,SelfLink:/api/v1/namespaces/e2e-tests-watch-xdjd4/configmaps/e2e-watch-test-configmap-a,UID:1df3b93c-a65b-11ea-99e8-0242ac110002,ResourceVersion:14177428,Generation:0,CreationTimestamp:2020-06-04 12:01:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jun 4 12:01:46.331: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-xdjd4,SelfLink:/api/v1/namespaces/e2e-tests-watch-xdjd4/configmaps/e2e-watch-test-configmap-a,UID:1df3b93c-a65b-11ea-99e8-0242ac110002,ResourceVersion:14177448,Generation:0,CreationTimestamp:2020-06-04 12:01:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 4 12:01:46.331: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-xdjd4,SelfLink:/api/v1/namespaces/e2e-tests-watch-xdjd4/configmaps/e2e-watch-test-configmap-a,UID:1df3b93c-a65b-11ea-99e8-0242ac110002,ResourceVersion:14177448,Generation:0,CreationTimestamp:2020-06-04 12:01:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jun 4 12:01:56.338: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-xdjd4,SelfLink:/api/v1/namespaces/e2e-tests-watch-xdjd4/configmaps/e2e-watch-test-configmap-a,UID:1df3b93c-a65b-11ea-99e8-0242ac110002,ResourceVersion:14177467,Generation:0,CreationTimestamp:2020-06-04 12:01:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 4 12:01:56.338: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-xdjd4,SelfLink:/api/v1/namespaces/e2e-tests-watch-xdjd4/configmaps/e2e-watch-test-configmap-a,UID:1df3b93c-a65b-11ea-99e8-0242ac110002,ResourceVersion:14177467,Generation:0,CreationTimestamp:2020-06-04 12:01:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jun 4 12:02:06.346: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-xdjd4,SelfLink:/api/v1/namespaces/e2e-tests-watch-xdjd4/configmaps/e2e-watch-test-configmap-b,UID:35d1f33c-a65b-11ea-99e8-0242ac110002,ResourceVersion:14177487,Generation:0,CreationTimestamp:2020-06-04 12:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 4 12:02:06.346: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-xdjd4,SelfLink:/api/v1/namespaces/e2e-tests-watch-xdjd4/configmaps/e2e-watch-test-configmap-b,UID:35d1f33c-a65b-11ea-99e8-0242ac110002,ResourceVersion:14177487,Generation:0,CreationTimestamp:2020-06-04 12:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jun 4 12:02:16.353: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-xdjd4,SelfLink:/api/v1/namespaces/e2e-tests-watch-xdjd4/configmaps/e2e-watch-test-configmap-b,UID:35d1f33c-a65b-11ea-99e8-0242ac110002,ResourceVersion:14177507,Generation:0,CreationTimestamp:2020-06-04 12:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 
4 12:02:16.353: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-xdjd4,SelfLink:/api/v1/namespaces/e2e-tests-watch-xdjd4/configmaps/e2e-watch-test-configmap-b,UID:35d1f33c-a65b-11ea-99e8-0242ac110002,ResourceVersion:14177507,Generation:0,CreationTimestamp:2020-06-04 12:02:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:02:26.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-xdjd4" for this suite. Jun 4 12:02:32.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:02:32.401: INFO: namespace: e2e-tests-watch-xdjd4, resource: bindings, ignored listing per whitelist Jun 4 12:02:32.452: INFO: namespace e2e-tests-watch-xdjd4 deletion completed in 6.093734663s • [SLOW TEST:66.285 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:02:32.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jun 4 12:02:32.566: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:02:40.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-t7hfh" for this suite. 
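The Watchers case above opens three label-selected watches on configmaps and checks that the ADDED, MODIFIED and DELETED events land on the right ones. The hypothetical snippet below opens a single such watch with client-go (pre-1.18 signature, no context argument) and prints events as they arrive; the kubeconfig path, namespace and label value are placeholders modelled on the test's watch-this-configmap labels.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch only configmaps carrying a particular label, in the spirit of the
	// test's watch-this-configmap=multiple-watchers-A selector (placeholder value here).
	w, err := client.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=demo-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each event is typed ADDED, MODIFIED or DELETED and carries the object.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}
```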
Jun 4 12:02:46.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:02:46.397: INFO: namespace: e2e-tests-init-container-t7hfh, resource: bindings, ignored listing per whitelist Jun 4 12:02:46.423: INFO: namespace e2e-tests-init-container-t7hfh deletion completed in 6.092065112s • [SLOW TEST:13.971 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:02:46.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 4 12:02:46.508: INFO: Waiting up to 5m0s for pod "pod-4dc07d97-a65b-11ea-86dc-0242ac110018" in namespace "e2e-tests-emptydir-h8kfr" to be "success or failure" Jun 4 12:02:46.521: INFO: Pod "pod-4dc07d97-a65b-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.11528ms Jun 4 12:02:48.525: INFO: Pod "pod-4dc07d97-a65b-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016464676s Jun 4 12:02:50.529: INFO: Pod "pod-4dc07d97-a65b-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020533495s STEP: Saw pod success Jun 4 12:02:50.529: INFO: Pod "pod-4dc07d97-a65b-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:02:50.532: INFO: Trying to get logs from node hunter-worker pod pod-4dc07d97-a65b-11ea-86dc-0242ac110018 container test-container: STEP: delete the pod Jun 4 12:02:50.575: INFO: Waiting for pod pod-4dc07d97-a65b-11ea-86dc-0242ac110018 to disappear Jun 4 12:02:50.586: INFO: Pod pod-4dc07d97-a65b-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:02:50.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-h8kfr" for this suite. 
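Most of the volume cases in this run follow the pattern visible in the timestamps: create a pod, poll its phase every couple of seconds for up to 5m0s until it is "success or failure", fetch the container log, then delete the pod. A hypothetical version of that wait loop is shown below, assuming pre-1.18 client-go signatures; the poll interval, timeout, namespace and pod name are placeholders.

```go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodCompletion polls until the pod reaches a terminal phase.
func waitForPodCompletion(client kubernetes.Interface, ns, name string) (v1.PodPhase, error) {
	var phase v1.PodPhase
	err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		phase = pod.Status.Phase
		return phase == v1.PodSucceeded || phase == v1.PodFailed, nil
	})
	return phase, err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	phase, err := waitForPodCompletion(client, "default", "pod-demo") // placeholder names
	if err != nil {
		panic(err)
	}
	fmt.Println("pod finished with phase", phase)
}
```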
Jun 4 12:02:56.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:02:56.686: INFO: namespace: e2e-tests-emptydir-h8kfr, resource: bindings, ignored listing per whitelist Jun 4 12:02:56.692: INFO: namespace e2e-tests-emptydir-h8kfr deletion completed in 6.102872068s • [SLOW TEST:10.269 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:02:56.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-53e474f8-a65b-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume secrets Jun 4 12:02:56.844: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-53e61a50-a65b-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-c677n" to be "success or failure" Jun 4 12:02:56.906: INFO: Pod "pod-projected-secrets-53e61a50-a65b-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 62.299959ms Jun 4 12:02:58.980: INFO: Pod "pod-projected-secrets-53e61a50-a65b-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136210004s Jun 4 12:03:00.985: INFO: Pod "pod-projected-secrets-53e61a50-a65b-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.141327976s STEP: Saw pod success Jun 4 12:03:00.985: INFO: Pod "pod-projected-secrets-53e61a50-a65b-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:03:00.990: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-53e61a50-a65b-11ea-86dc-0242ac110018 container projected-secret-volume-test: STEP: delete the pod Jun 4 12:03:01.046: INFO: Waiting for pod pod-projected-secrets-53e61a50-a65b-11ea-86dc-0242ac110018 to disappear Jun 4 12:03:01.055: INFO: Pod pod-projected-secrets-53e61a50-a65b-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:03:01.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-c677n" for this suite. 
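The projected-secret case mounts a Secret through a projected volume and sets defaultMode so the projected files get a specific permission before the container reads them back. Below is a hypothetical pod spec of that shape, printed as JSON; the secret name, mode, mount path and image are placeholders.

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // placeholder defaultMode

	pod := &v1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secret-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						// DefaultMode applies to every file the projection writes.
						DefaultMode: &mode,
						Sources: []v1.VolumeProjection{{
							Secret: &v1.SecretProjection{
								LocalObjectReference: v1.LocalObjectReference{Name: "projected-secret-demo"}, // placeholder secret
							},
						}},
					},
				},
			}},
			Containers: []v1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox", // stand-in image
				Command: []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected",
					ReadOnly:  true,
				}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```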
Jun 4 12:03:07.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:03:07.123: INFO: namespace: e2e-tests-projected-c677n, resource: bindings, ignored listing per whitelist Jun 4 12:03:07.138: INFO: namespace e2e-tests-projected-c677n deletion completed in 6.080191434s • [SLOW TEST:10.446 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:03:07.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 4 12:03:07.306: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Jun 4 12:03:07.311: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-47jpn/daemonsets","resourceVersion":"14177706"},"items":null} Jun 4 12:03:07.312: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-47jpn/pods","resourceVersion":"14177706"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:03:07.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-47jpn" for this suite. 
Jun 4 12:03:13.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:03:13.347: INFO: namespace: e2e-tests-daemonsets-47jpn, resource: bindings, ignored listing per whitelist Jun 4 12:03:13.412: INFO: namespace e2e-tests-daemonsets-47jpn deletion completed in 6.088355959s S [SKIPPING] [6.273 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 4 12:03:07.306: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:03:13.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-8ntdj A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-8ntdj;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-8ntdj A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-8ntdj;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-8ntdj.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-8ntdj.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-8ntdj.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-8ntdj.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-8ntdj.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-8ntdj.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-8ntdj.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-8ntdj.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-8ntdj.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 110.190.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.190.110_udp@PTR;check="$$(dig +tcp +noall +answer +search 110.190.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.190.110_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-8ntdj A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-8ntdj;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-8ntdj A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-8ntdj.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-8ntdj.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-8ntdj.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-8ntdj.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-8ntdj.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-8ntdj.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-8ntdj.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-8ntdj.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 110.190.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.190.110_udp@PTR;check="$$(dig +tcp +noall +answer +search 110.190.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.190.110_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 4 12:03:19.611: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:19.630: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:19.654: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:19.657: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:19.660: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8ntdj from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:19.663: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:19.666: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:19.669: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:19.672: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:19.676: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:19.696: INFO: Lookups using e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-8ntdj jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj jessie_udp@dns-test-service.e2e-tests-dns-8ntdj.svc jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc] Jun 4 
12:03:24.701: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:24.718: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:24.743: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:24.745: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:24.748: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8ntdj from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:24.750: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:24.753: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:24.756: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:24.759: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:24.762: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:24.776: INFO: Lookups using e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-8ntdj jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj jessie_udp@dns-test-service.e2e-tests-dns-8ntdj.svc jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc] Jun 4 12:03:29.701: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 
12:03:29.722: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:29.754: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:29.758: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:29.760: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8ntdj from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:29.780: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:29.801: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:29.804: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:29.808: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:29.811: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:29.837: INFO: Lookups using e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-8ntdj jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj jessie_udp@dns-test-service.e2e-tests-dns-8ntdj.svc jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc] Jun 4 12:03:34.702: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:34.724: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods 
dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:34.750: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:34.753: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:34.756: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8ntdj from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:34.760: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:34.763: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:34.767: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:34.770: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:34.773: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:34.791: INFO: Lookups using e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-8ntdj jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj jessie_udp@dns-test-service.e2e-tests-dns-8ntdj.svc jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc] Jun 4 12:03:39.701: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:39.718: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:39.737: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods 
dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:39.739: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:39.742: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8ntdj from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:39.745: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:39.748: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:39.751: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:39.753: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:39.756: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:39.775: INFO: Lookups using e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-8ntdj jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj jessie_udp@dns-test-service.e2e-tests-dns-8ntdj.svc jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc] Jun 4 12:03:44.702: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:44.722: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:44.746: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:44.750: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods 
dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:44.753: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8ntdj from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:44.757: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:44.760: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:44.762: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:44.765: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:44.768: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc from pod e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018: the server could not find the requested resource (get pods dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018) Jun 4 12:03:44.787: INFO: Lookups using e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-8ntdj jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj jessie_udp@dns-test-service.e2e-tests-dns-8ntdj.svc jessie_tcp@dns-test-service.e2e-tests-dns-8ntdj.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc] Jun 4 12:03:49.791: INFO: DNS probes using e2e-tests-dns-8ntdj/dns-test-5de2b7e8-a65b-11ea-86dc-0242ac110018 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:03:50.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-8ntdj" for this suite. 
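For reference, the wheezy/jessie probe loops recorded above can be reproduced by hand roughly as follows. This is a sketch, not part of the recorded run: the probe pod name and dnsutils image are assumptions, while the service names, namespace, and ClusterIP 10.107.190.110 are the ones from this run (whose namespace the suite deletes right after the test).
# launch a throwaway pod with dig available (image choice is illustrative)
kubectl run dns-probe --image=tutum/dnsutils --restart=Never --namespace=e2e-tests-dns-8ntdj -- sleep 3600
# A record for the ClusterIP service, over UDP and then TCP
kubectl exec dns-probe --namespace=e2e-tests-dns-8ntdj -- dig +notcp +noall +answer +search dns-test-service A
kubectl exec dns-probe --namespace=e2e-tests-dns-8ntdj -- dig +tcp +noall +answer +search dns-test-service A
# SRV record published for the service's named port
kubectl exec dns-probe --namespace=e2e-tests-dns-8ntdj -- dig +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-8ntdj.svc SRV
# reverse (PTR) lookup for the service ClusterIP seen in the probes above
kubectl exec dns-probe --namespace=e2e-tests-dns-8ntdj -- dig +noall +answer -x 10.107.190.110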
Jun 4 12:03:56.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:03:56.445: INFO: namespace: e2e-tests-dns-8ntdj, resource: bindings, ignored listing per whitelist Jun 4 12:03:56.496: INFO: namespace e2e-tests-dns-8ntdj deletion completed in 6.126743864s • [SLOW TEST:43.084 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:03:56.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 4 12:03:56.640: INFO: Waiting up to 5m0s for pod "downwardapi-volume-778f9064-a65b-11ea-86dc-0242ac110018" in namespace "e2e-tests-downward-api-bkj49" to be "success or failure" Jun 4 12:03:56.648: INFO: Pod "downwardapi-volume-778f9064-a65b-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.635128ms Jun 4 12:03:58.653: INFO: Pod "downwardapi-volume-778f9064-a65b-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01324786s Jun 4 12:04:00.657: INFO: Pod "downwardapi-volume-778f9064-a65b-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016740337s STEP: Saw pod success Jun 4 12:04:00.657: INFO: Pod "downwardapi-volume-778f9064-a65b-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:04:00.660: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-778f9064-a65b-11ea-86dc-0242ac110018 container client-container: STEP: delete the pod Jun 4 12:04:00.692: INFO: Waiting for pod downwardapi-volume-778f9064-a65b-11ea-86dc-0242ac110018 to disappear Jun 4 12:04:00.702: INFO: Pod downwardapi-volume-778f9064-a65b-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:04:00.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-bkj49" for this suite. 
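The cpu-limit pod above exercises the downward API volume's resourceFieldRef. A minimal hand-written equivalent looks roughly like this; it is a sketch, not the suite's own manifest, and the pod name, busybox image, and mount path are illustrative.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
EOF
# once the pod reaches Succeeded, the log is the limit expressed in millicores (500)
kubectl logs downwardapi-cpu-limit-demo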
Jun 4 12:04:06.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:04:06.735: INFO: namespace: e2e-tests-downward-api-bkj49, resource: bindings, ignored listing per whitelist Jun 4 12:04:06.794: INFO: namespace e2e-tests-downward-api-bkj49 deletion completed in 6.087126592s • [SLOW TEST:10.297 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:04:06.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-7dac6d88-a65b-11ea-86dc-0242ac110018 STEP: Creating secret with name s-test-opt-upd-7dac6dfa-a65b-11ea-86dc-0242ac110018 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-7dac6d88-a65b-11ea-86dc-0242ac110018 STEP: Updating secret s-test-opt-upd-7dac6dfa-a65b-11ea-86dc-0242ac110018 STEP: Creating secret with name s-test-opt-create-7dac6e29-a65b-11ea-86dc-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:05:31.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-hk5ct" for this suite. 
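The secret test above relies on optional secret volumes: the pod starts even while a referenced secret is missing, and later create/update/delete operations show up in the mounted files. A rough hand-run sketch follows; the pod name, busybox image, and key names are illustrative.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    secret:
      secretName: s-test-opt
      optional: true    # the pod is admitted and started even while the secret does not exist
EOF
# the mount stays empty until the secret appears; the kubelet sync loop can take up to a minute
kubectl create secret generic s-test-opt --from-literal=data-1=value-1
kubectl exec optional-secret-demo -- cat /etc/creds/data-1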
Jun 4 12:05:53.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:05:53.402: INFO: namespace: e2e-tests-secrets-hk5ct, resource: bindings, ignored listing per whitelist Jun 4 12:05:53.479: INFO: namespace e2e-tests-secrets-hk5ct deletion completed in 22.151206758s • [SLOW TEST:106.685 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:05:53.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 4 12:05:53.565: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:05:54.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-q59bj" for this suite. 
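Creating and deleting a CustomResourceDefinition, which the spec above does through the API, can be sketched with kubectl against the same v1.13 cluster (apiextensions.k8s.io/v1beta1 on this version); the group and kind below are illustrative.
kubectl create -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.samplecontroller.example.com   # must be <plural>.<group>
spec:
  group: samplecontroller.example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF
kubectl get crd foos.samplecontroller.example.com
kubectl delete crd foos.samplecontroller.example.com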
Jun 4 12:06:00.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:06:00.766: INFO: namespace: e2e-tests-custom-resource-definition-q59bj, resource: bindings, ignored listing per whitelist Jun 4 12:06:00.862: INFO: namespace e2e-tests-custom-resource-definition-q59bj deletion completed in 6.146429287s • [SLOW TEST:7.383 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:06:00.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Jun 4 12:06:00.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-26txq' Jun 4 12:06:03.310: INFO: stderr: "" Jun 4 12:06:03.310: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 4 12:06:03.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-26txq' Jun 4 12:06:03.461: INFO: stderr: "" Jun 4 12:06:03.461: INFO: stdout: "update-demo-nautilus-cgx7m update-demo-nautilus-j28rv " Jun 4 12:06:03.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgx7m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-26txq' Jun 4 12:06:03.551: INFO: stderr: "" Jun 4 12:06:03.551: INFO: stdout: "" Jun 4 12:06:03.551: INFO: update-demo-nautilus-cgx7m is created but not running Jun 4 12:06:08.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-26txq' Jun 4 12:06:08.658: INFO: stderr: "" Jun 4 12:06:08.658: INFO: stdout: "update-demo-nautilus-cgx7m update-demo-nautilus-j28rv " Jun 4 12:06:08.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgx7m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-26txq' Jun 4 12:06:08.754: INFO: stderr: "" Jun 4 12:06:08.754: INFO: stdout: "true" Jun 4 12:06:08.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgx7m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-26txq' Jun 4 12:06:08.858: INFO: stderr: "" Jun 4 12:06:08.858: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 4 12:06:08.858: INFO: validating pod update-demo-nautilus-cgx7m Jun 4 12:06:08.863: INFO: got data: { "image": "nautilus.jpg" } Jun 4 12:06:08.863: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 4 12:06:08.863: INFO: update-demo-nautilus-cgx7m is verified up and running Jun 4 12:06:08.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j28rv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-26txq' Jun 4 12:06:08.960: INFO: stderr: "" Jun 4 12:06:08.960: INFO: stdout: "true" Jun 4 12:06:08.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j28rv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-26txq' Jun 4 12:06:09.065: INFO: stderr: "" Jun 4 12:06:09.065: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 4 12:06:09.066: INFO: validating pod update-demo-nautilus-j28rv Jun 4 12:06:09.069: INFO: got data: { "image": "nautilus.jpg" } Jun 4 12:06:09.069: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 4 12:06:09.069: INFO: update-demo-nautilus-j28rv is verified up and running STEP: rolling-update to new replication controller Jun 4 12:06:09.071: INFO: scanned /root for discovery docs: Jun 4 12:06:09.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-26txq' Jun 4 12:06:31.612: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 4 12:06:31.612: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 4 12:06:31.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-26txq' Jun 4 12:06:31.719: INFO: stderr: "" Jun 4 12:06:31.719: INFO: stdout: "update-demo-kitten-lnwxq update-demo-kitten-znfn8 update-demo-nautilus-j28rv " STEP: Replicas for name=update-demo: expected=2 actual=3 Jun 4 12:06:36.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-26txq' Jun 4 12:06:36.834: INFO: stderr: "" Jun 4 12:06:36.834: INFO: stdout: "update-demo-kitten-lnwxq update-demo-kitten-znfn8 " Jun 4 12:06:36.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lnwxq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-26txq' Jun 4 12:06:36.930: INFO: stderr: "" Jun 4 12:06:36.930: INFO: stdout: "true" Jun 4 12:06:36.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lnwxq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-26txq' Jun 4 12:06:37.018: INFO: stderr: "" Jun 4 12:06:37.019: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 4 12:06:37.019: INFO: validating pod update-demo-kitten-lnwxq Jun 4 12:06:37.028: INFO: got data: { "image": "kitten.jpg" } Jun 4 12:06:37.028: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 4 12:06:37.028: INFO: update-demo-kitten-lnwxq is verified up and running Jun 4 12:06:37.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-znfn8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-26txq' Jun 4 12:06:37.139: INFO: stderr: "" Jun 4 12:06:37.139: INFO: stdout: "true" Jun 4 12:06:37.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-znfn8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-26txq' Jun 4 12:06:37.258: INFO: stderr: "" Jun 4 12:06:37.258: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 4 12:06:37.258: INFO: validating pod update-demo-kitten-znfn8 Jun 4 12:06:37.262: INFO: got data: { "image": "kitten.jpg" } Jun 4 12:06:37.262: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 4 12:06:37.262: INFO: update-demo-kitten-znfn8 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:06:37.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-26txq" for this suite. Jun 4 12:07:01.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:07:01.317: INFO: namespace: e2e-tests-kubectl-26txq, resource: bindings, ignored listing per whitelist Jun 4 12:07:01.352: INFO: namespace e2e-tests-kubectl-26txq deletion completed in 24.08640123s • [SLOW TEST:60.490 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:07:01.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 4 12:07:01.452: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5b45a58-a65b-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-vlzfl" to be "success or failure" Jun 4 12:07:01.455: INFO: Pod "downwardapi-volume-e5b45a58-a65b-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.334983ms Jun 4 12:07:03.459: INFO: Pod "downwardapi-volume-e5b45a58-a65b-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006798853s Jun 4 12:07:05.463: INFO: Pod "downwardapi-volume-e5b45a58-a65b-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0115931s STEP: Saw pod success Jun 4 12:07:05.463: INFO: Pod "downwardapi-volume-e5b45a58-a65b-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:07:05.467: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-e5b45a58-a65b-11ea-86dc-0242ac110018 container client-container: STEP: delete the pod Jun 4 12:07:05.487: INFO: Waiting for pod downwardapi-volume-e5b45a58-a65b-11ea-86dc-0242ac110018 to disappear Jun 4 12:07:05.491: INFO: Pod downwardapi-volume-e5b45a58-a65b-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:07:05.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vlzfl" for this suite. Jun 4 12:07:11.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:07:11.604: INFO: namespace: e2e-tests-projected-vlzfl, resource: bindings, ignored listing per whitelist Jun 4 12:07:11.607: INFO: namespace e2e-tests-projected-vlzfl deletion completed in 6.113184911s • [SLOW TEST:10.255 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:07:11.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 4 12:07:11.745: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ebda30a9-a65b-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-tnpvw" to be "success or failure" Jun 4 12:07:11.763: INFO: Pod "downwardapi-volume-ebda30a9-a65b-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.758005ms Jun 4 12:07:13.792: INFO: Pod "downwardapi-volume-ebda30a9-a65b-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047035054s Jun 4 12:07:15.797: INFO: Pod "downwardapi-volume-ebda30a9-a65b-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.051344997s STEP: Saw pod success Jun 4 12:07:15.797: INFO: Pod "downwardapi-volume-ebda30a9-a65b-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:07:15.800: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ebda30a9-a65b-11ea-86dc-0242ac110018 container client-container: STEP: delete the pod Jun 4 12:07:15.822: INFO: Waiting for pod downwardapi-volume-ebda30a9-a65b-11ea-86dc-0242ac110018 to disappear Jun 4 12:07:15.827: INFO: Pod downwardapi-volume-ebda30a9-a65b-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:07:15.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tnpvw" for this suite. Jun 4 12:07:21.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:07:21.885: INFO: namespace: e2e-tests-projected-tnpvw, resource: bindings, ignored listing per whitelist Jun 4 12:07:21.985: INFO: namespace e2e-tests-projected-tnpvw deletion completed in 6.155136873s • [SLOW TEST:10.379 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:07:21.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 4 12:07:22.173: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.937233ms) Jun 4 12:07:22.176: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.219256ms) Jun 4 12:07:22.180: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.050498ms) Jun 4 12:07:22.184: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.682525ms) Jun 4 12:07:22.187: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.418899ms) Jun 4 12:07:22.190: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.038341ms) Jun 4 12:07:22.194: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.387822ms) Jun 4 12:07:22.197: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.377931ms) Jun 4 12:07:22.224: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 26.497178ms) Jun 4 12:07:22.228: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.099557ms) Jun 4 12:07:22.232: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.839313ms) Jun 4 12:07:22.236: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.927521ms) Jun 4 12:07:22.240: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.71727ms) Jun 4 12:07:22.244: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.990106ms) Jun 4 12:07:22.248: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.004902ms) Jun 4 12:07:22.251: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.764509ms) Jun 4 12:07:22.255: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.831361ms) Jun 4 12:07:22.259: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 3.237938ms) Jun 4 12:07:22.261: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 2.609229ms) Jun 4 12:07:22.264: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/
(200; 2.599102ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:07:22.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-8795c" for this suite. Jun 4 12:07:28.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:07:28.327: INFO: namespace: e2e-tests-proxy-8795c, resource: bindings, ignored listing per whitelist Jun 4 12:07:28.364: INFO: namespace e2e-tests-proxy-8795c deletion completed in 6.096717429s • [SLOW TEST:6.378 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:07:28.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 4 12:07:28.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-kk4wm' Jun 4 12:07:28.549: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 4 12:07:28.549: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Jun 4 12:07:28.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-kk4wm' Jun 4 12:07:28.687: INFO: stderr: "" Jun 4 12:07:28.687: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:07:28.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kk4wm" for this suite. 
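The twenty probes logged above all hit the node's log directory through the apiserver proxy subresource at /api/v1/nodes/hunter-worker:10250/proxy/logs/ and got back the containers/ and pods/ listing. The same request can be made by hand in either of two ways; this is a sketch, with the node name and kubelet port taken from this run and the local proxy port chosen arbitrarily.
# directly through the apiserver, using the current kubeconfig credentials
kubectl get --raw "/api/v1/nodes/hunter-worker:10250/proxy/logs/"
# or through a local API proxy, as plain HTTP
kubectl proxy --port=8080 &
curl -s http://127.0.0.1:8080/api/v1/nodes/hunter-worker:10250/proxy/logs/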
Jun 4 12:07:34.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:07:34.744: INFO: namespace: e2e-tests-kubectl-kk4wm, resource: bindings, ignored listing per whitelist Jun 4 12:07:34.786: INFO: namespace e2e-tests-kubectl-kk4wm deletion completed in 6.095607947s • [SLOW TEST:6.422 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:07:34.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 4 12:07:39.457: INFO: Successfully updated pod "annotationupdatef9a48f2e-a65b-11ea-86dc-0242ac110018" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:07:41.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wt5v4" for this suite. 
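The annotation-update pod above works because downward API volume items with fieldRef metadata.annotations are refreshed by the kubelet when the pod's annotations change. A rough equivalent follows; the pod name, busybox image, and annotation key are illustrative.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-update-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
# change the annotation and watch the mounted file (and therefore the log) pick up the new value
kubectl annotate pod annotation-update-demo build="two" --overwrite
kubectl logs annotation-update-demo --tail=5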
Jun 4 12:08:03.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:08:03.548: INFO: namespace: e2e-tests-downward-api-wt5v4, resource: bindings, ignored listing per whitelist Jun 4 12:08:03.585: INFO: namespace e2e-tests-downward-api-wt5v4 deletion completed in 22.108708015s • [SLOW TEST:28.799 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:08:03.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0604 12:08:43.723382 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 4 12:08:43.723: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:08:43.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-jjvxc" for this suite. 
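The garbage-collector test above deletes a replication controller with orphaning delete options, so the collector strips the owner references instead of deleting the pods. With kubectl the same behaviour is reachable via --cascade=false (spelled --cascade=orphan in newer clients); the RC name and nginx image below are illustrative.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: orphan-demo
spec:
  replicas: 2
  selector:
    app: orphan-demo
  template:
    metadata:
      labels:
        app: orphan-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
EOF
# delete only the controller; the garbage collector orphans the pods rather than deleting them
kubectl delete rc orphan-demo --cascade=false
kubectl get pods -l app=orphan-demo    # the two pods are still running, now without an owner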
Jun 4 12:08:53.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:08:53.773: INFO: namespace: e2e-tests-gc-jjvxc, resource: bindings, ignored listing per whitelist Jun 4 12:08:53.804: INFO: namespace e2e-tests-gc-jjvxc deletion completed in 10.078146042s • [SLOW TEST:50.219 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:08:53.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Jun 4 12:08:53.904: INFO: Waiting up to 5m0s for pod "var-expansion-28bd5015-a65c-11ea-86dc-0242ac110018" in namespace "e2e-tests-var-expansion-zqbtz" to be "success or failure" Jun 4 12:08:53.907: INFO: Pod "var-expansion-28bd5015-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.345454ms Jun 4 12:08:55.911: INFO: Pod "var-expansion-28bd5015-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006777391s Jun 4 12:08:57.915: INFO: Pod "var-expansion-28bd5015-a65c-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011055675s STEP: Saw pod success Jun 4 12:08:57.915: INFO: Pod "var-expansion-28bd5015-a65c-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:08:57.918: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-28bd5015-a65c-11ea-86dc-0242ac110018 container dapi-container: STEP: delete the pod Jun 4 12:08:58.148: INFO: Waiting for pod var-expansion-28bd5015-a65c-11ea-86dc-0242ac110018 to disappear Jun 4 12:08:58.213: INFO: Pod var-expansion-28bd5015-a65c-11ea-86dc-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:08:58.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-zqbtz" for this suite. 
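Variable expansion in a container's command, as exercised by the var-expansion pod above, means the kubelet substitutes $(NAME) references in command and args from the container's env before the process starts. A minimal sketch; the pod name, busybox image, and variable name are illustrative.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: TEST_VAR
      value: "expanded-by-the-kubelet"
    command: ["sh", "-c", "echo TEST_VAR is $(TEST_VAR)"]
EOF
# once Succeeded, the log line reads: TEST_VAR is expanded-by-the-kubelet
kubectl logs var-expansion-demo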
Jun 4 12:09:04.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:09:04.311: INFO: namespace: e2e-tests-var-expansion-zqbtz, resource: bindings, ignored listing per whitelist Jun 4 12:09:04.354: INFO: namespace e2e-tests-var-expansion-zqbtz deletion completed in 6.138105877s • [SLOW TEST:10.550 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:09:04.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 4 12:09:04.497: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f098c6b-a65c-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-klh8d" to be "success or failure" Jun 4 12:09:04.519: INFO: Pod "downwardapi-volume-2f098c6b-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.430181ms Jun 4 12:09:06.522: INFO: Pod "downwardapi-volume-2f098c6b-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02551798s Jun 4 12:09:08.526: INFO: Pod "downwardapi-volume-2f098c6b-a65c-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029706284s STEP: Saw pod success Jun 4 12:09:08.526: INFO: Pod "downwardapi-volume-2f098c6b-a65c-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:09:08.530: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-2f098c6b-a65c-11ea-86dc-0242ac110018 container client-container: STEP: delete the pod Jun 4 12:09:08.550: INFO: Waiting for pod downwardapi-volume-2f098c6b-a65c-11ea-86dc-0242ac110018 to disappear Jun 4 12:09:08.555: INFO: Pod downwardapi-volume-2f098c6b-a65c-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:09:08.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-klh8d" for this suite. 
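The DefaultMode case above applies one file mode to every file a projected volume writes. A rough equivalent with a single downward API source; the pod name, busybox image, and mount path are illustrative.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400    # every projected file is created owner-read-only
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
kubectl logs projected-mode-demo    # prints the pod's own name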
Jun 4 12:09:14.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:09:14.612: INFO: namespace: e2e-tests-projected-klh8d, resource: bindings, ignored listing per whitelist Jun 4 12:09:14.663: INFO: namespace e2e-tests-projected-klh8d deletion completed in 6.103973642s • [SLOW TEST:10.308 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:09:14.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-3533d61e-a65c-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume secrets Jun 4 12:09:14.821: INFO: Waiting up to 5m0s for pod "pod-secrets-3534585b-a65c-11ea-86dc-0242ac110018" in namespace "e2e-tests-secrets-ds8lr" to be "success or failure" Jun 4 12:09:14.824: INFO: Pod "pod-secrets-3534585b-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.360425ms Jun 4 12:09:16.828: INFO: Pod "pod-secrets-3534585b-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00733343s Jun 4 12:09:18.832: INFO: Pod "pod-secrets-3534585b-a65c-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011261894s STEP: Saw pod success Jun 4 12:09:18.832: INFO: Pod "pod-secrets-3534585b-a65c-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:09:18.836: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-3534585b-a65c-11ea-86dc-0242ac110018 container secret-volume-test: STEP: delete the pod Jun 4 12:09:18.872: INFO: Waiting for pod pod-secrets-3534585b-a65c-11ea-86dc-0242ac110018 to disappear Jun 4 12:09:18.890: INFO: Pod pod-secrets-3534585b-a65c-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:09:18.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-ds8lr" for this suite. 
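Setting defaultMode on a plain secret volume, as the test above does, looks roughly like this; the secret contents, names, and 0400 mode are illustrative.

apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test        # container name taken from the log above
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400             # octal; owner read-only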
Jun 4 12:09:24.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:09:24.972: INFO: namespace: e2e-tests-secrets-ds8lr, resource: bindings, ignored listing per whitelist Jun 4 12:09:24.972: INFO: namespace e2e-tests-secrets-ds8lr deletion completed in 6.078703615s • [SLOW TEST:10.310 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:09:24.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:09:30.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-sfgjd" for this suite. 
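Adoption, as exercised above, only requires a bare pod whose labels match a replication controller's selector. The 'name' label and the pod-adoption name come straight from the STEP lines, while the image is a stand-in.

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption              # the label the controller's selector matches
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption              # matches the pre-existing pod, so it is adopted instead of replaced
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine

Once the controller exists, the formerly orphaned pod gains an ownerReference pointing at it, which is what the final 'Then the orphan pod is adopted' step checks.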
Jun 4 12:09:52.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:09:52.251: INFO: namespace: e2e-tests-replication-controller-sfgjd, resource: bindings, ignored listing per whitelist Jun 4 12:09:52.264: INFO: namespace e2e-tests-replication-controller-sfgjd deletion completed in 22.108296095s • [SLOW TEST:27.292 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:09:52.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-4b9b896d-a65c-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume secrets Jun 4 12:09:52.417: INFO: Waiting up to 5m0s for pod "pod-secrets-4b9c239a-a65c-11ea-86dc-0242ac110018" in namespace "e2e-tests-secrets-n9lb2" to be "success or failure" Jun 4 12:09:52.453: INFO: Pod "pod-secrets-4b9c239a-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 36.088412ms Jun 4 12:09:54.537: INFO: Pod "pod-secrets-4b9c239a-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120084713s Jun 4 12:09:56.555: INFO: Pod "pod-secrets-4b9c239a-a65c-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.13800456s STEP: Saw pod success Jun 4 12:09:56.555: INFO: Pod "pod-secrets-4b9c239a-a65c-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:09:56.558: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-4b9c239a-a65c-11ea-86dc-0242ac110018 container secret-volume-test: STEP: delete the pod Jun 4 12:09:56.596: INFO: Waiting for pod pod-secrets-4b9c239a-a65c-11ea-86dc-0242ac110018 to disappear Jun 4 12:09:56.609: INFO: Pod pod-secrets-4b9c239a-a65c-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:09:56.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-n9lb2" for this suite. 
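The non-root variant above adds a pod-level securityContext on top of the same secret-volume pattern; the UID, fsGroup, and 0440 mode below are illustrative values, not the test's exact ones.

apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # run the container as a non-root UID
    fsGroup: 1000                   # volume files are group-owned by this GID
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "id && ls -ln /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret       # assumes the Secret from the earlier sketch exists
      defaultMode: 0440             # octal; group-readable so the fsGroup user can read it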
Jun 4 12:10:02.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:10:02.671: INFO: namespace: e2e-tests-secrets-n9lb2, resource: bindings, ignored listing per whitelist Jun 4 12:10:02.710: INFO: namespace e2e-tests-secrets-n9lb2 deletion completed in 6.097181647s • [SLOW TEST:10.445 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:10:02.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 4 12:10:02.824: INFO: Creating ReplicaSet my-hostname-basic-51d38e8c-a65c-11ea-86dc-0242ac110018 Jun 4 12:10:02.837: INFO: Pod name my-hostname-basic-51d38e8c-a65c-11ea-86dc-0242ac110018: Found 0 pods out of 1 Jun 4 12:10:07.841: INFO: Pod name my-hostname-basic-51d38e8c-a65c-11ea-86dc-0242ac110018: Found 1 pods out of 1 Jun 4 12:10:07.841: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-51d38e8c-a65c-11ea-86dc-0242ac110018" is running Jun 4 12:10:07.845: INFO: Pod "my-hostname-basic-51d38e8c-a65c-11ea-86dc-0242ac110018-gzh6h" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-04 12:10:02 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-04 12:10:05 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-04 12:10:05 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-04 12:10:02 +0000 UTC Reason: Message:}]) Jun 4 12:10:07.845: INFO: Trying to dial the pod Jun 4 12:10:12.865: INFO: Controller my-hostname-basic-51d38e8c-a65c-11ea-86dc-0242ac110018: Got expected result from replica 1 [my-hostname-basic-51d38e8c-a65c-11ea-86dc-0242ac110018-gzh6h]: "my-hostname-basic-51d38e8c-a65c-11ea-86dc-0242ac110018-gzh6h", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:10:12.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-vtrmj" for this suite. 
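The ReplicaSet created above has the usual selector/template shape; a rough equivalent is below. The e2e suite uses an image that serves the pod's hostname over HTTP (that is what 'Trying to dial the pod' verifies), which this log does not name, so a generic public image stands in for it here.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: docker.io/library/nginx:1.14-alpine   # stand-in for the test's hostname-serving image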
Jun 4 12:10:18.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:10:18.961: INFO: namespace: e2e-tests-replicaset-vtrmj, resource: bindings, ignored listing per whitelist Jun 4 12:10:18.978: INFO: namespace e2e-tests-replicaset-vtrmj deletion completed in 6.108079924s • [SLOW TEST:16.267 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:10:18.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-5b847a78-a65c-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume secrets Jun 4 12:10:19.111: INFO: Waiting up to 5m0s for pod "pod-secrets-5b84e76b-a65c-11ea-86dc-0242ac110018" in namespace "e2e-tests-secrets-npgg2" to be "success or failure" Jun 4 12:10:19.120: INFO: Pod "pod-secrets-5b84e76b-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.412261ms Jun 4 12:10:21.124: INFO: Pod "pod-secrets-5b84e76b-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013582292s Jun 4 12:10:23.129: INFO: Pod "pod-secrets-5b84e76b-a65c-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018370375s STEP: Saw pod success Jun 4 12:10:23.129: INFO: Pod "pod-secrets-5b84e76b-a65c-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:10:23.132: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-5b84e76b-a65c-11ea-86dc-0242ac110018 container secret-volume-test: STEP: delete the pod Jun 4 12:10:23.294: INFO: Waiting for pod pod-secrets-5b84e76b-a65c-11ea-86dc-0242ac110018 to disappear Jun 4 12:10:23.310: INFO: Pod pod-secrets-5b84e76b-a65c-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:10:23.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-npgg2" for this suite. 
Jun 4 12:10:29.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:10:29.343: INFO: namespace: e2e-tests-secrets-npgg2, resource: bindings, ignored listing per whitelist Jun 4 12:10:29.390: INFO: namespace e2e-tests-secrets-npgg2 deletion completed in 6.076844028s • [SLOW TEST:10.413 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:10:29.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-61c44dc6-a65c-11ea-86dc-0242ac110018 STEP: Creating configMap with name cm-test-opt-upd-61c44e3d-a65c-11ea-86dc-0242ac110018 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-61c44dc6-a65c-11ea-86dc-0242ac110018 STEP: Updating configmap cm-test-opt-upd-61c44e3d-a65c-11ea-86dc-0242ac110018 STEP: Creating configMap with name cm-test-opt-create-61c44eab-a65c-11ea-86dc-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:12:08.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-f7lc6" for this suite. 
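The optional-update case above works because projected ConfigMap sources can be marked optional: the pod starts even if a referenced ConfigMap is missing, and the kubelet refreshes the projected files as ConfigMaps are created, updated, or deleted. A minimal sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/config/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: optional-config
      mountPath: /etc/config
  volumes:
  - name: optional-config
    projected:
      sources:
      - configMap:
          name: cm-created-later        # may not exist yet
          optional: true                # without this, volume setup would fail
          items:
          - key: data-1
            path: data-1

Because the kubelet refreshes projected contents on its periodic sync, changes become visible only eventually, which is why most of this spec's 120 seconds are spent in 'waiting to observe update in volume'.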
Jun 4 12:12:30.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:12:30.208: INFO: namespace: e2e-tests-projected-f7lc6, resource: bindings, ignored listing per whitelist Jun 4 12:12:30.264: INFO: namespace e2e-tests-projected-f7lc6 deletion completed in 22.084641789s • [SLOW TEST:120.874 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:12:30.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-qzx5t/configmap-test-a9c56653-a65c-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 4 12:12:30.423: INFO: Waiting up to 5m0s for pod "pod-configmaps-a9c87ff0-a65c-11ea-86dc-0242ac110018" in namespace "e2e-tests-configmap-qzx5t" to be "success or failure" Jun 4 12:12:30.432: INFO: Pod "pod-configmaps-a9c87ff0-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.804736ms Jun 4 12:12:32.436: INFO: Pod "pod-configmaps-a9c87ff0-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013358209s Jun 4 12:12:34.440: INFO: Pod "pod-configmaps-a9c87ff0-a65c-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017088665s STEP: Saw pod success Jun 4 12:12:34.440: INFO: Pod "pod-configmaps-a9c87ff0-a65c-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:12:34.443: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-a9c87ff0-a65c-11ea-86dc-0242ac110018 container env-test: STEP: delete the pod Jun 4 12:12:34.463: INFO: Waiting for pod pod-configmaps-a9c87ff0-a65c-11ea-86dc-0242ac110018 to disappear Jun 4 12:12:34.495: INFO: Pod pod-configmaps-a9c87ff0-a65c-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:12:34.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qzx5t" for this suite. 
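Consuming a ConfigMap through the environment, as above, needs only a configMapKeyRef (or envFrom to pull in every key); names and values below are illustrative.

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test                    # container name taken from the log above
    image: busybox:1.29
    command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: demo-config
          key: data-1

Unlike volume-based consumption, environment variables are resolved once at container start, so later ConfigMap edits are not reflected without a restart.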
Jun 4 12:12:40.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:12:40.667: INFO: namespace: e2e-tests-configmap-qzx5t, resource: bindings, ignored listing per whitelist Jun 4 12:12:40.682: INFO: namespace e2e-tests-configmap-qzx5t deletion completed in 6.183717905s • [SLOW TEST:10.417 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:12:40.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-afff027b-a65c-11ea-86dc-0242ac110018 STEP: Creating secret with name secret-projected-all-test-volume-afff0263-a65c-11ea-86dc-0242ac110018 STEP: Creating a pod to test Check all projections for projected volume plugin Jun 4 12:12:40.854: INFO: Waiting up to 5m0s for pod "projected-volume-afff0209-a65c-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-77dhs" to be "success or failure" Jun 4 12:12:40.858: INFO: Pod "projected-volume-afff0209-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.606828ms Jun 4 12:12:42.929: INFO: Pod "projected-volume-afff0209-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074569068s Jun 4 12:12:44.933: INFO: Pod "projected-volume-afff0209-a65c-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078730055s STEP: Saw pod success Jun 4 12:12:44.933: INFO: Pod "projected-volume-afff0209-a65c-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:12:44.935: INFO: Trying to get logs from node hunter-worker pod projected-volume-afff0209-a65c-11ea-86dc-0242ac110018 container projected-all-volume-test: STEP: delete the pod Jun 4 12:12:44.975: INFO: Waiting for pod projected-volume-afff0209-a65c-11ea-86dc-0242ac110018 to disappear Jun 4 12:12:44.979: INFO: Pod projected-volume-afff0209-a65c-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:12:44.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-77dhs" for this suite. 
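The 'all components' spec above combines downwardAPI, ConfigMap, and Secret sources in a single projected volume, which is the point of the projection API. A sketch, assuming a ConfigMap demo-config and a Secret demo-secret with a data-1 key already exist:

apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test   # container name taken from the log above
    image: busybox:1.29
    command: ["sh", "-c", "cat /all/podname /all/cm-data /all/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: demo-config           # assumed to exist
          items:
          - key: data-1
            path: cm-data
      - secret:
          name: demo-secret           # assumed to exist
          items:
          - key: data-1
            path: secret-data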
Jun 4 12:12:50.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:12:51.043: INFO: namespace: e2e-tests-projected-77dhs, resource: bindings, ignored listing per whitelist Jun 4 12:12:51.075: INFO: namespace e2e-tests-projected-77dhs deletion completed in 6.092358296s • [SLOW TEST:10.392 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:12:51.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0604 12:12:52.257291 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 4 12:12:52.257: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:12:52.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-rgbvp" for this suite. 
Jun 4 12:12:58.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:12:58.345: INFO: namespace: e2e-tests-gc-rgbvp, resource: bindings, ignored listing per whitelist Jun 4 12:12:58.415: INFO: namespace e2e-tests-gc-rgbvp deletion completed in 6.094068784s • [SLOW TEST:7.339 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:12:58.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jun 4 12:12:58.500: INFO: namespace e2e-tests-kubectl-2fhrp Jun 4 12:12:58.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2fhrp' Jun 4 12:12:58.746: INFO: stderr: "" Jun 4 12:12:58.746: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jun 4 12:12:59.751: INFO: Selector matched 1 pods for map[app:redis] Jun 4 12:12:59.751: INFO: Found 0 / 1 Jun 4 12:13:00.752: INFO: Selector matched 1 pods for map[app:redis] Jun 4 12:13:00.752: INFO: Found 0 / 1 Jun 4 12:13:01.752: INFO: Selector matched 1 pods for map[app:redis] Jun 4 12:13:01.752: INFO: Found 0 / 1 Jun 4 12:13:02.751: INFO: Selector matched 1 pods for map[app:redis] Jun 4 12:13:02.751: INFO: Found 1 / 1 Jun 4 12:13:02.751: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 4 12:13:02.755: INFO: Selector matched 1 pods for map[app:redis] Jun 4 12:13:02.755: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 4 12:13:02.755: INFO: wait on redis-master startup in e2e-tests-kubectl-2fhrp Jun 4 12:13:02.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-m5dhg redis-master --namespace=e2e-tests-kubectl-2fhrp' Jun 4 12:13:02.873: INFO: stderr: "" Jun 4 12:13:02.873: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 04 Jun 12:13:01.372 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Jun 12:13:01.372 # Server started, Redis version 3.2.12\n1:M 04 Jun 12:13:01.372 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 04 Jun 12:13:01.373 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jun 4 12:13:02.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-2fhrp' Jun 4 12:13:03.012: INFO: stderr: "" Jun 4 12:13:03.013: INFO: stdout: "service/rm2 exposed\n" Jun 4 12:13:03.015: INFO: Service rm2 in namespace e2e-tests-kubectl-2fhrp found. STEP: exposing service Jun 4 12:13:05.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-2fhrp' Jun 4 12:13:05.164: INFO: stderr: "" Jun 4 12:13:05.164: INFO: stdout: "service/rm3 exposed\n" Jun 4 12:13:05.171: INFO: Service rm3 in namespace e2e-tests-kubectl-2fhrp found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:13:07.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2fhrp" for this suite. 
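The first expose command above (rm2) generates a Service roughly equivalent to the manifest below; the selector comes from the redis-master pods' app=redis label, which is the selector the log shows being matched while waiting for the master to start.

apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: e2e-tests-kubectl-2fhrp
spec:
  selector:
    app: redis            # inherited from the replication controller's pod labels
  ports:
  - protocol: TCP
    port: 1234            # --port
    targetPort: 6379      # --target-port

Exposing rm2 again as rm3 produces the same shape with port 2345, still targeting 6379 on the same redis pods.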
Jun 4 12:13:29.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:13:29.270: INFO: namespace: e2e-tests-kubectl-2fhrp, resource: bindings, ignored listing per whitelist Jun 4 12:13:29.300: INFO: namespace e2e-tests-kubectl-2fhrp deletion completed in 22.119297158s • [SLOW TEST:30.885 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:13:29.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 4 12:13:29.425: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ccf5edbd-a65c-11ea-86dc-0242ac110018" in namespace "e2e-tests-downward-api-d52x6" to be "success or failure" Jun 4 12:13:29.429: INFO: Pod "downwardapi-volume-ccf5edbd-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.366348ms Jun 4 12:13:31.433: INFO: Pod "downwardapi-volume-ccf5edbd-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007638151s Jun 4 12:13:33.437: INFO: Pod "downwardapi-volume-ccf5edbd-a65c-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011336884s STEP: Saw pod success Jun 4 12:13:33.437: INFO: Pod "downwardapi-volume-ccf5edbd-a65c-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:13:33.439: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ccf5edbd-a65c-11ea-86dc-0242ac110018 container client-container: STEP: delete the pod Jun 4 12:13:33.477: INFO: Waiting for pod downwardapi-volume-ccf5edbd-a65c-11ea-86dc-0242ac110018 to disappear Jun 4 12:13:33.516: INFO: Pod downwardapi-volume-ccf5edbd-a65c-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:13:33.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-d52x6" for this suite. 
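The memory-limit case above uses the downward API's resourceFieldRef: because the container sets no memory limit, the projected value falls back to the node's allocatable memory, exactly as the spec name says. A sketch with illustrative names (note the deliberate absence of resources.limits):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container            # container name taken from the log above
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi                # report the value in MiB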
Jun 4 12:13:39.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:13:39.548: INFO: namespace: e2e-tests-downward-api-d52x6, resource: bindings, ignored listing per whitelist Jun 4 12:13:39.613: INFO: namespace e2e-tests-downward-api-d52x6 deletion completed in 6.093680063s • [SLOW TEST:10.313 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:13:39.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-d31c5971-a65c-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume secrets Jun 4 12:13:39.750: INFO: Waiting up to 5m0s for pod "pod-secrets-d31cd0b1-a65c-11ea-86dc-0242ac110018" in namespace "e2e-tests-secrets-gvw2l" to be "success or failure" Jun 4 12:13:39.760: INFO: Pod "pod-secrets-d31cd0b1-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.184457ms Jun 4 12:13:41.853: INFO: Pod "pod-secrets-d31cd0b1-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10317573s Jun 4 12:13:43.857: INFO: Pod "pod-secrets-d31cd0b1-a65c-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107413512s STEP: Saw pod success Jun 4 12:13:43.857: INFO: Pod "pod-secrets-d31cd0b1-a65c-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:13:43.860: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-d31cd0b1-a65c-11ea-86dc-0242ac110018 container secret-volume-test: STEP: delete the pod Jun 4 12:13:43.882: INFO: Waiting for pod pod-secrets-d31cd0b1-a65c-11ea-86dc-0242ac110018 to disappear Jun 4 12:13:43.886: INFO: Pod pod-secrets-d31cd0b1-a65c-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:13:43.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-gvw2l" for this suite. 
Jun 4 12:13:49.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:13:49.972: INFO: namespace: e2e-tests-secrets-gvw2l, resource: bindings, ignored listing per whitelist Jun 4 12:13:49.981: INFO: namespace e2e-tests-secrets-gvw2l deletion completed in 6.092269649s • [SLOW TEST:10.368 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:13:49.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-d94fa0ff-a65c-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 4 12:13:50.164: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d951dd6b-a65c-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-hmntn" to be "success or failure" Jun 4 12:13:50.216: INFO: Pod "pod-projected-configmaps-d951dd6b-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 51.360798ms Jun 4 12:13:52.220: INFO: Pod "pod-projected-configmaps-d951dd6b-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055475401s Jun 4 12:13:54.224: INFO: Pod "pod-projected-configmaps-d951dd6b-a65c-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059282848s STEP: Saw pod success Jun 4 12:13:54.224: INFO: Pod "pod-projected-configmaps-d951dd6b-a65c-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:13:54.227: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-d951dd6b-a65c-11ea-86dc-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jun 4 12:13:54.244: INFO: Waiting for pod pod-projected-configmaps-d951dd6b-a65c-11ea-86dc-0242ac110018 to disappear Jun 4 12:13:54.249: INFO: Pod pod-projected-configmaps-d951dd6b-a65c-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:13:54.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hmntn" for this suite. 
Jun 4 12:14:00.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:14:00.315: INFO: namespace: e2e-tests-projected-hmntn, resource: bindings, ignored listing per whitelist Jun 4 12:14:00.371: INFO: namespace e2e-tests-projected-hmntn deletion completed in 6.118270599s • [SLOW TEST:10.389 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:14:00.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 4 12:14:00.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jun 4 12:14:00.683: INFO: stderr: "" Jun 4 12:14:00.683: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:14:00.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-r2854" for this suite. 
Jun 4 12:14:06.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:14:06.735: INFO: namespace: e2e-tests-kubectl-r2854, resource: bindings, ignored listing per whitelist Jun 4 12:14:06.799: INFO: namespace e2e-tests-kubectl-r2854 deletion completed in 6.111166578s • [SLOW TEST:6.428 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:14:06.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 4 12:14:06.936: INFO: Waiting up to 5m0s for pod "pod-e3535305-a65c-11ea-86dc-0242ac110018" in namespace "e2e-tests-emptydir-mwmjs" to be "success or failure" Jun 4 12:14:06.988: INFO: Pod "pod-e3535305-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 52.095017ms Jun 4 12:14:08.993: INFO: Pod "pod-e3535305-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056933754s Jun 4 12:14:10.997: INFO: Pod "pod-e3535305-a65c-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06148756s STEP: Saw pod success Jun 4 12:14:10.997: INFO: Pod "pod-e3535305-a65c-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:14:11.001: INFO: Trying to get logs from node hunter-worker2 pod pod-e3535305-a65c-11ea-86dc-0242ac110018 container test-container: STEP: delete the pod Jun 4 12:14:11.019: INFO: Waiting for pod pod-e3535305-a65c-11ea-86dc-0242ac110018 to disappear Jun 4 12:14:11.024: INFO: Pod pod-e3535305-a65c-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:14:11.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-mwmjs" for this suite. 
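The (root,0777,default) case above amounts to mounting a default-medium emptyDir as root and checking 0777 permissions on what gets written there; a rough equivalent, with the command and names as illustrative placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container              # container name taken from the log above
    image: busybox:1.29
    command: ["sh", "-c", "ls -ld /mnt/test && touch /mnt/test/f && chmod 0777 /mnt/test/f && ls -l /mnt/test/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir: {}                      # default medium, i.e. node-local storage rather than memory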
Jun 4 12:14:17.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:14:17.073: INFO: namespace: e2e-tests-emptydir-mwmjs, resource: bindings, ignored listing per whitelist Jun 4 12:14:17.123: INFO: namespace e2e-tests-emptydir-mwmjs deletion completed in 6.095670379s • [SLOW TEST:10.324 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:14:17.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-e9745541-a65c-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 4 12:14:17.255: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e9797835-a65c-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-k4knc" to be "success or failure" Jun 4 12:14:17.258: INFO: Pod "pod-projected-configmaps-e9797835-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.252206ms Jun 4 12:14:19.263: INFO: Pod "pod-projected-configmaps-e9797835-a65c-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007690654s Jun 4 12:14:21.265: INFO: Pod "pod-projected-configmaps-e9797835-a65c-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010390467s STEP: Saw pod success Jun 4 12:14:21.265: INFO: Pod "pod-projected-configmaps-e9797835-a65c-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:14:21.270: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-e9797835-a65c-11ea-86dc-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod Jun 4 12:14:21.306: INFO: Waiting for pod pod-projected-configmaps-e9797835-a65c-11ea-86dc-0242ac110018 to disappear Jun 4 12:14:21.334: INFO: Pod pod-projected-configmaps-e9797835-a65c-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:14:21.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-k4knc" for this suite. 
Jun 4 12:14:27.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:14:27.434: INFO: namespace: e2e-tests-projected-k4knc, resource: bindings, ignored listing per whitelist Jun 4 12:14:27.456: INFO: namespace e2e-tests-projected-k4knc deletion completed in 6.117794656s • [SLOW TEST:10.333 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:14:27.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-rlt5l STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-rlt5l STEP: Deleting pre-stop pod Jun 4 12:14:40.602: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:14:40.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-rlt5l" for this suite. 
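In the PreStop run above, the tester pod's preStop hook reports back to the server pod before the tester terminates, which is why the server's recorded state ends up with "prestop": 1. Where such a hook lives in a pod spec is sketched below; the command is illustrative (the conformance test's hook calls the server pod rather than just logging).

apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: tester
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop hook ran; sleep 5"]

On deletion, the kubelet runs the preStop command first and only then signals the container, all within the termination grace period.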
Jun 4 12:15:18.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:15:18.683: INFO: namespace: e2e-tests-prestop-rlt5l, resource: bindings, ignored listing per whitelist Jun 4 12:15:18.705: INFO: namespace e2e-tests-prestop-rlt5l deletion completed in 38.092598901s • [SLOW TEST:51.249 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:15:18.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 4 12:15:18.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-76z7t' Jun 4 12:15:18.902: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 4 12:15:18.902: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: rolling-update to same image controller Jun 4 12:15:18.933: INFO: scanned /root for discovery docs: Jun 4 12:15:18.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-76z7t' Jun 4 12:15:34.808: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 4 12:15:34.808: INFO: stdout: "Created e2e-test-nginx-rc-b89e56a4073ffa46b701c305182b17ee\nScaling up e2e-test-nginx-rc-b89e56a4073ffa46b701c305182b17ee from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b89e56a4073ffa46b701c305182b17ee up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-b89e56a4073ffa46b701c305182b17ee to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Jun 4 12:15:34.808: INFO: stdout: "Created e2e-test-nginx-rc-b89e56a4073ffa46b701c305182b17ee\nScaling up e2e-test-nginx-rc-b89e56a4073ffa46b701c305182b17ee from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b89e56a4073ffa46b701c305182b17ee up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-b89e56a4073ffa46b701c305182b17ee to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Jun 4 12:15:34.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-76z7t' Jun 4 12:15:34.922: INFO: stderr: "" Jun 4 12:15:34.922: INFO: stdout: "e2e-test-nginx-rc-b89e56a4073ffa46b701c305182b17ee-5kntm " Jun 4 12:15:34.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b89e56a4073ffa46b701c305182b17ee-5kntm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-76z7t' Jun 4 12:15:35.022: INFO: stderr: "" Jun 4 12:15:35.022: INFO: stdout: "true" Jun 4 12:15:35.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b89e56a4073ffa46b701c305182b17ee-5kntm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-76z7t' Jun 4 12:15:35.121: INFO: stderr: "" Jun 4 12:15:35.121: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jun 4 12:15:35.121: INFO: e2e-test-nginx-rc-b89e56a4073ffa46b701c305182b17ee-5kntm is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Jun 4 12:15:35.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-76z7t' Jun 4 12:15:35.239: INFO: stderr: "" Jun 4 12:15:35.239: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:15:35.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-76z7t" for this suite. 
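kubectl rolling-update, used above, is deprecated (the run's own stderr says so) and has since been removed from kubectl; the declarative replacement is a Deployment, where changing or re-setting the image triggers the rollout machinery instead of the controller-copy sequence shown in the log. A roughly equivalent Deployment for the nginx RC above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx
  template:
    metadata:
      labels:
        run: e2e-test-nginx
    spec:
      containers:
      - name: e2e-test-nginx
        image: docker.io/library/nginx:1.14-alpine
        imagePullPolicy: IfNotPresent

With this in place, kubectl rollout status deployment/e2e-test-nginx plays the role of the polling loop the test runs against the replication controller.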
Jun 4 12:15:57.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:15:57.288: INFO: namespace: e2e-tests-kubectl-76z7t, resource: bindings, ignored listing per whitelist Jun 4 12:15:57.338: INFO: namespace e2e-tests-kubectl-76z7t deletion completed in 22.095699446s • [SLOW TEST:38.633 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:15:57.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 4 12:15:57.459: INFO: Creating deployment "nginx-deployment" Jun 4 12:15:57.477: INFO: Waiting for observed generation 1 Jun 4 12:15:59.867: INFO: Waiting for all required pods to come up Jun 4 12:16:00.082: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Jun 4 12:16:10.132: INFO: Waiting for deployment "nginx-deployment" to complete Jun 4 12:16:10.138: INFO: Updating deployment "nginx-deployment" with a non-existent image Jun 4 12:16:10.145: INFO: Updating deployment nginx-deployment Jun 4 12:16:10.145: INFO: Waiting for observed generation 2 Jun 4 12:16:12.194: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jun 4 12:16:12.197: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jun 4 12:16:12.632: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jun 4 12:16:13.244: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jun 4 12:16:13.244: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jun 4 12:16:13.246: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jun 4 12:16:13.250: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Jun 4 12:16:13.250: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Jun 4 12:16:13.255: INFO: Updating deployment nginx-deployment Jun 4 12:16:13.255: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Jun 4 12:16:13.530: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jun 4 12:16:13.544: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 4 12:16:16.751: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mm5bt/deployments/nginx-deployment,UID:2534d3a9-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180644,Generation:3,CreationTimestamp:2020-06-04 12:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-06-04 12:16:13 +0000 UTC 2020-06-04 12:16:13 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-06-04 12:16:13 +0000 UTC 2020-06-04 12:15:57 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Jun 4 12:16:17.306: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mm5bt/replicasets/nginx-deployment-5c98f8fb5,UID:2cc47133-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180630,Generation:3,CreationTimestamp:2020-06-04 12:16:10 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 2534d3a9-a65d-11ea-99e8-0242ac110002 0xc0023b9137 0xc0023b9138}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 4 12:16:17.306: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jun 4 12:16:17.306: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mm5bt/replicasets/nginx-deployment-85ddf47c5d,UID:2538790b-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180640,Generation:3,CreationTimestamp:2020-06-04 12:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 2534d3a9-a65d-11ea-99e8-0242ac110002 0xc0023b9287 0xc0023b9288}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jun 4 12:16:17.684: INFO: Pod "nginx-deployment-5c98f8fb5-2zndw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2zndw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-5c98f8fb5-2zndw,UID:2ecedd7b-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180620,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2cc47133-a65d-11ea-99e8-0242ac110002 0xc001e57d37 0xc001e57d38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e57db0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e57dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.684: INFO: Pod "nginx-deployment-5c98f8fb5-48t9s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-48t9s,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-5c98f8fb5-48t9s,UID:2ecee720-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180706,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2cc47133-a65d-11ea-99e8-0242ac110002 0xc001e57e47 0xc001e57e48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e57ec0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e57ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-04 12:16:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.684: INFO: Pod "nginx-deployment-5c98f8fb5-764cr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-764cr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-5c98f8fb5-764cr,UID:2eceebfd-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180618,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2cc47133-a65d-11ea-99e8-0242ac110002 0xc001e57fa7 0xc001e57fa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024fc4e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024fc550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.684: INFO: Pod "nginx-deployment-5c98f8fb5-8xjvd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8xjvd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-5c98f8fb5-8xjvd,UID:2cc555b4-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180534,Generation:0,CreationTimestamp:2020-06-04 12:16:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2cc47133-a65d-11ea-99e8-0242ac110002 0xc0024fc5c7 0xc0024fc5c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024fc640} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024fc660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-04 12:16:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.684: INFO: Pod "nginx-deployment-5c98f8fb5-f7g98" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-f7g98,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-5c98f8fb5-f7g98,UID:2ecb4cf2-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180645,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2cc47133-a65d-11ea-99e8-0242ac110002 0xc0024fc967 0xc0024fc968}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024fcae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024fcb80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-04 12:16:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.685: INFO: Pod "nginx-deployment-5c98f8fb5-l6pg4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-l6pg4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-5c98f8fb5-l6pg4,UID:2ecb2c2c-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180649,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2cc47133-a65d-11ea-99e8-0242ac110002 0xc0024fcc47 0xc0024fcc48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] 
{map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024fcd00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024fd050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-04 12:16:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.685: INFO: Pod "nginx-deployment-5c98f8fb5-lzpmz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lzpmz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-5c98f8fb5-lzpmz,UID:2cef3336-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180561,Generation:0,CreationTimestamp:2020-06-04 12:16:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2cc47133-a65d-11ea-99e8-0242ac110002 0xc0024fd187 0xc0024fd188}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024fd200} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024fd230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-04 12:16:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.685: INFO: Pod "nginx-deployment-5c98f8fb5-pdwf8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pdwf8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-5c98f8fb5-pdwf8,UID:2ed6dc73-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180628,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2cc47133-a65d-11ea-99e8-0242ac110002 0xc0024fd667 0xc0024fd668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024fd6f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024fdac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.685: INFO: Pod "nginx-deployment-5c98f8fb5-qp6mf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qp6mf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-5c98f8fb5-qp6mf,UID:2ecee807-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180622,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2cc47133-a65d-11ea-99e8-0242ac110002 0xc0024fdb87 0xc0024fdb88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024fdc00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024fdc20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.685: INFO: Pod "nginx-deployment-5c98f8fb5-rsh8w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rsh8w,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-5c98f8fb5-rsh8w,UID:2cc654ee-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180549,Generation:0,CreationTimestamp:2020-06-04 12:16:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2cc47133-a65d-11ea-99e8-0242ac110002 0xc0024fde47 0xc0024fde48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024fdec0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024fdee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-04 12:16:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.685: INFO: Pod "nginx-deployment-5c98f8fb5-s8glw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-s8glw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-5c98f8fb5-s8glw,UID:2cebd38d-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180558,Generation:0,CreationTimestamp:2020-06-04 12:16:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2cc47133-a65d-11ea-99e8-0242ac110002 0xc0024fdfa7 0xc0024fdfa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002472050} {node.kubernetes.io/unreachable Exists NoExecute 0xc002472070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-04 12:16:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.685: INFO: Pod "nginx-deployment-5c98f8fb5-sfvdf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sfvdf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-5c98f8fb5-sfvdf,UID:2ec9391a-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180641,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2cc47133-a65d-11ea-99e8-0242ac110002 0xc002472137 0xc002472138}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024721b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024721d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-04 12:16:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.686: INFO: Pod "nginx-deployment-5c98f8fb5-vzkdb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vzkdb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-5c98f8fb5-vzkdb,UID:2cc64e9a-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180540,Generation:0,CreationTimestamp:2020-06-04 12:16:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2cc47133-a65d-11ea-99e8-0242ac110002 0xc0024722a7 0xc0024722a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] 
map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002472320} {node.kubernetes.io/unreachable Exists NoExecute 0xc002472340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-04 12:16:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.686: INFO: Pod "nginx-deployment-85ddf47c5d-2862g" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2862g,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-2862g,UID:2540468f-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180448,Generation:0,CreationTimestamp:2020-06-04 12:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc002472417 0xc002472418}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002472490} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024724b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:15:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.122,StartTime:2020-06-04 12:15:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-04 12:16:02 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d8f277bb3fc6bb7a139584184927b0056fc1e82a5764d31f81530592490e19c3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.686: INFO: Pod "nginx-deployment-85ddf47c5d-2n7pz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2n7pz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-2n7pz,UID:25412483-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180498,Generation:0,CreationTimestamp:2020-06-04 12:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc002472587 0xc002472588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002472600} {node.kubernetes.io/unreachable Exists NoExecute 0xc002472620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:15:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.124,StartTime:2020-06-04 12:15:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-04 12:16:07 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3bc18e2d916587d84383117b2c408565d882e2f1176c1483d4806c907724bf44}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.686: INFO: Pod "nginx-deployment-85ddf47c5d-678gv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-678gv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-678gv,UID:2ecb4567-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180651,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc0024728f7 0xc0024728f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002472990} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024729b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-04 12:16:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.686: INFO: Pod "nginx-deployment-85ddf47c5d-7qqvx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7qqvx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-7qqvx,UID:25411d85-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180482,Generation:0,CreationTimestamp:2020-06-04 12:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc002473247 0xc002473248}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024732c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024732e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:15:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.171,StartTime:2020-06-04 12:15:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-04 12:16:07 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0dbba7ee917c36a3793fca56de0cc82a3d9c62cbc4607f4c30e1c04788db5fda}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.686: INFO: Pod "nginx-deployment-85ddf47c5d-7tbr9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7tbr9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-7tbr9,UID:2ecec3c9-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180697,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc002473477 0xc002473478}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024734f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002473590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-04 12:16:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.686: INFO: Pod "nginx-deployment-85ddf47c5d-87k8l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-87k8l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-87k8l,UID:2ec9339d-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180631,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc0024bc0f7 0xc0024bc0f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024bc170} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024bc190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-04 12:16:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.686: INFO: Pod "nginx-deployment-85ddf47c5d-89vf5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-89vf5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-89vf5,UID:2ec91e6d-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180638,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc0024bc247 0xc0024bc248}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024bc380} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024bc3a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-04 12:16:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.687: INFO: Pod "nginx-deployment-85ddf47c5d-ccfgs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ccfgs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-ccfgs,UID:2ece965c-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180691,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc0024bc457 0xc0024bc458}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024bc4d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024bc4f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-04 12:16:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.687: INFO: Pod "nginx-deployment-85ddf47c5d-cwcjd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cwcjd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-cwcjd,UID:253fe25e-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180452,Generation:0,CreationTimestamp:2020-06-04 12:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc0024bc5a7 0xc0024bc5a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024bc620} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024bc640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:15:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.169,StartTime:2020-06-04 12:15:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-04 12:16:02 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://edd8960643d9b6bfbd327d5d175e766aabe4fc2d00f182cf82288ed882207392}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.687: INFO: Pod "nginx-deployment-85ddf47c5d-j5mfs" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-j5mfs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-j5mfs,UID:2548e7fa-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180495,Generation:0,CreationTimestamp:2020-06-04 12:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc0024bca07 0xc0024bca08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024bca80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024bcaa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:15:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.125,StartTime:2020-06-04 12:15:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-04 12:16:07 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d26e77da2d9cbd4b45e9c5ff98badc261a0f5e4132109ca861c93b5389c4d123}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.687: INFO: Pod "nginx-deployment-85ddf47c5d-nxc62" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nxc62,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-nxc62,UID:2ecb34a4-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180685,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc0024bcd07 0xc0024bcd08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024bcd80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024bcda0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-04 12:16:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.687: INFO: Pod "nginx-deployment-85ddf47c5d-ptjnf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ptjnf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-ptjnf,UID:2ecec226-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180617,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc0024bcec7 0xc0024bcec8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024bcf40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024bcf60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.687: INFO: Pod "nginx-deployment-85ddf47c5d-qsj4d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qsj4d,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-qsj4d,UID:2ecb42dd-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180692,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc0024bd047 0xc0024bd048}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024bd0c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024bd0e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-04 12:16:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.687: INFO: Pod "nginx-deployment-85ddf47c5d-s5rf8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s5rf8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-s5rf8,UID:2ecb533b-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180673,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc0024bd197 0xc0024bd198}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024bd290} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024bd340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-04 12:16:13 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.687: INFO: Pod "nginx-deployment-85ddf47c5d-sbp48" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sbp48,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-sbp48,UID:2ececb79-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180623,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc0024bd477 0xc0024bd478}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024bd550} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024bd570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.687: INFO: Pod "nginx-deployment-85ddf47c5d-vsx9m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vsx9m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-vsx9m,UID:2eacda16-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180626,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc0024bd5e7 
0xc0024bd5e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024bd710} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024bd730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-04 12:16:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.688: INFO: Pod "nginx-deployment-85ddf47c5d-xrkk9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xrkk9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-xrkk9,UID:2ecea560-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180704,Generation:0,CreationTimestamp:2020-06-04 12:16:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc0024bd7e7 0xc0024bd7e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024bd860} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024bd8f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:13 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-04 12:16:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.688: INFO: Pod "nginx-deployment-85ddf47c5d-xw97m" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xw97m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-xw97m,UID:2548fc3c-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180492,Generation:0,CreationTimestamp:2020-06-04 12:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc0024bd9a7 0xc0024bd9a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024bda20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024bdaa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:15:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.126,StartTime:2020-06-04 12:15:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-04 12:16:07 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c5197e587520e988524f2b79c7b69a671210477bb244f68ec8507574f48f54d7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.688: INFO: Pod "nginx-deployment-85ddf47c5d-xzz8v" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xzz8v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-xzz8v,UID:2541150c-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180462,Generation:0,CreationTimestamp:2020-06-04 12:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc0024bdb67 0xc0024bdb68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024bdbe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024bdc00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:15:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:05 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.123,StartTime:2020-06-04 12:15:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-04 12:16:04 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b51b15de6117097113de72562302fd5c162b9675a1c1a46103e30318818b4d83}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 4 12:16:17.688: INFO: Pod "nginx-deployment-85ddf47c5d-z6qk7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-z6qk7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-mm5bt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-mm5bt/pods/nginx-deployment-85ddf47c5d-z6qk7,UID:25405003-a65d-11ea-99e8-0242ac110002,ResourceVersion:14180471,Generation:0,CreationTimestamp:2020-06-04 12:15:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2538790b-a65d-11ea-99e8-0242ac110002 0xc0024bdcc7 0xc0024bdcc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cfp44 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfp44,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-cfp44 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024bdd40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024bdd60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:15:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:16:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-04 12:15:57 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.170,StartTime:2020-06-04 12:15:57 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-04 12:16:06 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://620dab974d4437fb6de9719702df0f42241daa02f305482f0ea9af2ebc2f56a2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:16:17.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-mm5bt" for this suite. 
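For readers who want to reproduce the "is available" / "is not available" classification printed for each nginx-deployment pod above, the Go sketch below lists the pods behind the Deployment and reports their readiness. It is not the e2e framework's own helper: availability is approximated here by the PodReady condition alone (the framework also honours minReadySeconds), the namespace and label selector are copied from this test run, the kubeconfig path is an assumption, and the context-taking List call assumes a client-go release newer than the v1.13 vintage shown in this log.

// availability.go: simplified re-creation of the per-pod availability report above.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True. This is a
// simplification of the framework's availability check.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes a kubeconfig at the conventional location.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace and label selector are taken from the test run above.
	pods, err := cs.CoreV1().Pods("e2e-tests-deployment-mm5bt").List(
		context.TODO(), metav1.ListOptions{LabelSelector: "name=nginx"})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		p := &pods.Items[i]
		state := "is not available"
		if isReady(p) {
			state = "is available"
		}
		fmt.Printf("Pod %q %s (phase=%s, node=%s)\n", p.Name, state, p.Status.Phase, p.Spec.NodeName)
	}
}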
Jun 4 12:16:38.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:16:38.525: INFO: namespace: e2e-tests-deployment-mm5bt, resource: bindings, ignored listing per whitelist Jun 4 12:16:38.527: INFO: namespace e2e-tests-deployment-mm5bt deletion completed in 20.318581874s • [SLOW TEST:41.188 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:16:38.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 4 12:16:43.155: INFO: Successfully updated pod "labelsupdate3dbc4121-a65d-11ea-86dc-0242ac110018" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:16:45.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pcs4t" for this suite. 
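The "update labels on modification" spec above builds a pod whose projected downwardAPI volume exposes metadata.labels, edits the labels, and waits for the mounted file to change. A minimal Go sketch of that pod shape follows; the pod name, image, and mount path are illustrative placeholders rather than the framework's generated values.

// labelspod.go: illustrative pod with a projected downwardAPI volume exposing its labels.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo", // placeholder; the test generates its name
			Labels: map[string]string{"key": "value1"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "labels",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}
	fmt.Printf("pod %q mounts its labels at /etc/podinfo/labels\n", pod.Name)
}

Once such a pod is running, changing its labels (for example with kubectl label pod labelsupdate-demo key=value2 --overwrite) is eventually reflected in the mounted file, which is the update the spec waits to observe.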
Jun 4 12:17:07.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:17:07.218: INFO: namespace: e2e-tests-projected-pcs4t, resource: bindings, ignored listing per whitelist Jun 4 12:17:07.286: INFO: namespace e2e-tests-projected-pcs4t deletion completed in 22.092897935s • [SLOW TEST:28.760 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:17:07.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 4 12:17:07.424: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ee5fba1-a65d-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-4hr4x" to be "success or failure" Jun 4 12:17:07.428: INFO: Pod "downwardapi-volume-4ee5fba1-a65d-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.517527ms Jun 4 12:17:09.432: INFO: Pod "downwardapi-volume-4ee5fba1-a65d-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007377843s Jun 4 12:17:11.436: INFO: Pod "downwardapi-volume-4ee5fba1-a65d-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011430625s STEP: Saw pod success Jun 4 12:17:11.436: INFO: Pod "downwardapi-volume-4ee5fba1-a65d-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:17:11.438: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-4ee5fba1-a65d-11ea-86dc-0242ac110018 container client-container: STEP: delete the pod Jun 4 12:17:11.497: INFO: Waiting for pod downwardapi-volume-4ee5fba1-a65d-11ea-86dc-0242ac110018 to disappear Jun 4 12:17:11.506: INFO: Pod downwardapi-volume-4ee5fba1-a65d-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:17:11.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4hr4x" for this suite. 
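The repeated Phase="Pending" ... Phase="Succeeded" lines in the "should provide podname only" spec come from a poll that waits for the test pod to reach a terminal phase. A simplified stand-alone version of that wait is sketched below; namespace and pod name are placeholders, a kubeconfig at the default path is assumed, and the context-taking Get call again assumes a recent client-go.

// waitpod.go: poll a pod until it reaches Succeeded or Failed, logging progress.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed location
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, name := "default", "downwardapi-volume-demo" // placeholders
	start := time.Now()
	// Poll every 2s for up to 5m until the pod reaches a terminal phase,
	// logging progress in the same spirit as the e2e output above.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %s\n", name, pod.Status.Phase, time.Since(start))
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
	if err != nil {
		panic(err)
	}
}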
Jun 4 12:17:17.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:17:17.558: INFO: namespace: e2e-tests-projected-4hr4x, resource: bindings, ignored listing per whitelist Jun 4 12:17:17.595: INFO: namespace e2e-tests-projected-4hr4x deletion completed in 6.084826255s • [SLOW TEST:10.309 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:17:17.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jun 4 12:17:17.679: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:17:26.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-mc257" for this suite. 
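The init-container test above creates a pod whose spec.initContainers must all run to completion, in order, before the regular containers start. A minimal Go sketch of such a spec; names, images, and commands are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Both init containers must exit successfully, in order,
			// before the regular container below is started.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"sh", "-c", "sleep 3600"}},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}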
Jun 4 12:17:48.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:17:48.758: INFO: namespace: e2e-tests-init-container-mc257, resource: bindings, ignored listing per whitelist Jun 4 12:17:48.803: INFO: namespace e2e-tests-init-container-mc257 deletion completed in 22.087588172s • [SLOW TEST:31.207 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:17:48.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 4 12:17:53.022: INFO: Waiting up to 5m0s for pod "client-envvars-6a10e543-a65d-11ea-86dc-0242ac110018" in namespace "e2e-tests-pods-7pmhg" to be "success or failure" Jun 4 12:17:53.095: INFO: Pod "client-envvars-6a10e543-a65d-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 73.11215ms Jun 4 12:17:55.098: INFO: Pod "client-envvars-6a10e543-a65d-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076720284s Jun 4 12:17:57.102: INFO: Pod "client-envvars-6a10e543-a65d-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080586404s STEP: Saw pod success Jun 4 12:17:57.102: INFO: Pod "client-envvars-6a10e543-a65d-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:17:57.105: INFO: Trying to get logs from node hunter-worker pod client-envvars-6a10e543-a65d-11ea-86dc-0242ac110018 container env3cont: STEP: delete the pod Jun 4 12:17:57.257: INFO: Waiting for pod client-envvars-6a10e543-a65d-11ea-86dc-0242ac110018 to disappear Jun 4 12:17:57.274: INFO: Pod client-envvars-6a10e543-a65d-11ea-86dc-0242ac110018 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:17:57.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-7pmhg" for this suite. 
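The environment-variable test above relies on the kubelet injecting <SERVICE>_SERVICE_HOST and <SERVICE>_SERVICE_PORT variables for services that already exist when the pod starts. A small Go sketch of reading them from inside a container; the service name used here is hypothetical.

package main

import (
	"fmt"
	"os"
	"strings"
)

// serviceEnv returns the host/port environment variables Kubernetes injects
// for a service that existed before this pod was started.
func serviceEnv(serviceName string) (host, port string) {
	prefix := strings.ToUpper(strings.ReplaceAll(serviceName, "-", "_"))
	return os.Getenv(prefix + "_SERVICE_HOST"), os.Getenv(prefix + "_SERVICE_PORT")
}

func main() {
	host, port := serviceEnv("fooservice") // hypothetical service name
	fmt.Printf("fooservice: host=%q port=%q\n", host, port)
}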
Jun 4 12:18:47.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:18:47.339: INFO: namespace: e2e-tests-pods-7pmhg, resource: bindings, ignored listing per whitelist Jun 4 12:18:47.371: INFO: namespace e2e-tests-pods-7pmhg deletion completed in 50.094463206s • [SLOW TEST:58.568 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:18:47.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-8a845e16-a65d-11ea-86dc-0242ac110018 STEP: Creating a pod to test consume configMaps Jun 4 12:18:47.484: INFO: Waiting up to 5m0s for pod "pod-configmaps-8a8b7d5a-a65d-11ea-86dc-0242ac110018" in namespace "e2e-tests-configmap-xdx62" to be "success or failure" Jun 4 12:18:47.490: INFO: Pod "pod-configmaps-8a8b7d5a-a65d-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.396737ms Jun 4 12:18:49.503: INFO: Pod "pod-configmaps-8a8b7d5a-a65d-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018989292s Jun 4 12:18:51.506: INFO: Pod "pod-configmaps-8a8b7d5a-a65d-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021891536s STEP: Saw pod success Jun 4 12:18:51.506: INFO: Pod "pod-configmaps-8a8b7d5a-a65d-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:18:51.526: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-8a8b7d5a-a65d-11ea-86dc-0242ac110018 container configmap-volume-test: STEP: delete the pod Jun 4 12:18:51.565: INFO: Waiting for pod pod-configmaps-8a8b7d5a-a65d-11ea-86dc-0242ac110018 to disappear Jun 4 12:18:51.574: INFO: Pod pod-configmaps-8a8b7d5a-a65d-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:18:51.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-xdx62" for this suite. 
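The configMap-volume test above mounts selected configMap keys at custom paths while the pod runs as a non-root user. A hedged Go sketch of a pod spec combining a keyToPath mapping with runAsUser; the configMap name, key, paths, UID, and image are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1000) // any non-zero UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/configmap-volume/path/to/data"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// Map a single key to a custom path inside the mount.
						Items: []corev1.KeyToPath{{Key: "data", Path: "path/to/data"}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}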
Jun 4 12:18:57.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:18:57.682: INFO: namespace: e2e-tests-configmap-xdx62, resource: bindings, ignored listing per whitelist Jun 4 12:18:57.688: INFO: namespace e2e-tests-configmap-xdx62 deletion completed in 6.111159499s • [SLOW TEST:10.316 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:18:57.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jun 4 12:18:58.589: INFO: Pod name wrapped-volume-race-91268cb5-a65d-11ea-86dc-0242ac110018: Found 0 pods out of 5 Jun 4 12:19:03.596: INFO: Pod name wrapped-volume-race-91268cb5-a65d-11ea-86dc-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-91268cb5-a65d-11ea-86dc-0242ac110018 in namespace e2e-tests-emptydir-wrapper-wp546, will wait for the garbage collector to delete the pods Jun 4 12:20:55.722: INFO: Deleting ReplicationController wrapped-volume-race-91268cb5-a65d-11ea-86dc-0242ac110018 took: 7.438963ms Jun 4 12:20:55.822: INFO: Terminating ReplicationController wrapped-volume-race-91268cb5-a65d-11ea-86dc-0242ac110018 pods took: 100.21708ms STEP: Creating RC which spawns configmap-volume pods Jun 4 12:21:41.771: INFO: Pod name wrapped-volume-race-f2671451-a65d-11ea-86dc-0242ac110018: Found 0 pods out of 5 Jun 4 12:21:46.781: INFO: Pod name wrapped-volume-race-f2671451-a65d-11ea-86dc-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f2671451-a65d-11ea-86dc-0242ac110018 in namespace e2e-tests-emptydir-wrapper-wp546, will wait for the garbage collector to delete the pods Jun 4 12:24:22.868: INFO: Deleting ReplicationController wrapped-volume-race-f2671451-a65d-11ea-86dc-0242ac110018 took: 6.192749ms Jun 4 12:24:22.968: INFO: Terminating ReplicationController wrapped-volume-race-f2671451-a65d-11ea-86dc-0242ac110018 pods took: 100.284314ms STEP: Creating RC which spawns configmap-volume pods Jun 4 12:25:02.398: INFO: Pod name wrapped-volume-race-69ff3973-a65e-11ea-86dc-0242ac110018: Found 0 pods out of 5 Jun 4 12:25:07.406: INFO: Pod name wrapped-volume-race-69ff3973-a65e-11ea-86dc-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController 
wrapped-volume-race-69ff3973-a65e-11ea-86dc-0242ac110018 in namespace e2e-tests-emptydir-wrapper-wp546, will wait for the garbage collector to delete the pods Jun 4 12:27:43.499: INFO: Deleting ReplicationController wrapped-volume-race-69ff3973-a65e-11ea-86dc-0242ac110018 took: 8.442433ms Jun 4 12:27:43.599: INFO: Terminating ReplicationController wrapped-volume-race-69ff3973-a65e-11ea-86dc-0242ac110018 pods took: 100.356343ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:28:22.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-wp546" for this suite. Jun 4 12:28:30.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:28:30.351: INFO: namespace: e2e-tests-emptydir-wrapper-wp546, resource: bindings, ignored listing per whitelist Jun 4 12:28:30.419: INFO: namespace e2e-tests-emptydir-wrapper-wp546 deletion completed in 8.131874726s • [SLOW TEST:572.731 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:28:30.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Jun 4 12:28:30.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jun 4 12:28:30.735: INFO: stderr: "" Jun 4 12:28:30.735: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:28:30.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lszn9" for this 
suite. Jun 4 12:28:36.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:28:36.799: INFO: namespace: e2e-tests-kubectl-lszn9, resource: bindings, ignored listing per whitelist Jun 4 12:28:36.842: INFO: namespace e2e-tests-kubectl-lszn9 deletion completed in 6.101832016s • [SLOW TEST:6.421 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:28:36.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 4 12:28:36.939: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e9e20f3b-a65e-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-h6g8z" to be "success or failure" Jun 4 12:28:36.942: INFO: Pod "downwardapi-volume-e9e20f3b-a65e-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.624566ms Jun 4 12:28:39.099: INFO: Pod "downwardapi-volume-e9e20f3b-a65e-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160506012s Jun 4 12:28:41.104: INFO: Pod "downwardapi-volume-e9e20f3b-a65e-11ea-86dc-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.164868259s Jun 4 12:28:43.108: INFO: Pod "downwardapi-volume-e9e20f3b-a65e-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.169590582s STEP: Saw pod success Jun 4 12:28:43.108: INFO: Pod "downwardapi-volume-e9e20f3b-a65e-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:28:43.112: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-e9e20f3b-a65e-11ea-86dc-0242ac110018 container client-container: STEP: delete the pod Jun 4 12:28:43.189: INFO: Waiting for pod downwardapi-volume-e9e20f3b-a65e-11ea-86dc-0242ac110018 to disappear Jun 4 12:28:43.204: INFO: Pod downwardapi-volume-e9e20f3b-a65e-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:28:43.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-h6g8z" for this suite. 
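The cpu-limit test above reads the container's own limit back through the downward API. A Go sketch of the relevant resourceFieldRef wiring; the limit value, divisor, names, and image are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									// ResourceFieldRef exposes the container's CPU limit,
									// expressed in units of the divisor (millicores here).
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
										Divisor:       resource.MustParse("1m"),
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}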
Jun 4 12:28:49.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:28:49.303: INFO: namespace: e2e-tests-projected-h6g8z, resource: bindings, ignored listing per whitelist Jun 4 12:28:49.331: INFO: namespace e2e-tests-projected-h6g8z deletion completed in 6.123732207s • [SLOW TEST:12.490 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:28:49.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 4 12:28:49.508: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f15475fa-a65e-11ea-86dc-0242ac110018" in namespace "e2e-tests-projected-zc8ts" to be "success or failure" Jun 4 12:28:49.534: INFO: Pod "downwardapi-volume-f15475fa-a65e-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 25.110122ms Jun 4 12:28:51.566: INFO: Pod "downwardapi-volume-f15475fa-a65e-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05728743s Jun 4 12:28:53.570: INFO: Pod "downwardapi-volume-f15475fa-a65e-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061490626s STEP: Saw pod success Jun 4 12:28:53.570: INFO: Pod "downwardapi-volume-f15475fa-a65e-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:28:53.574: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-f15475fa-a65e-11ea-86dc-0242ac110018 container client-container: STEP: delete the pod Jun 4 12:28:53.607: INFO: Waiting for pod downwardapi-volume-f15475fa-a65e-11ea-86dc-0242ac110018 to disappear Jun 4 12:28:53.617: INFO: Pod downwardapi-volume-f15475fa-a65e-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:28:53.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zc8ts" for this suite. 
Jun 4 12:28:59.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:28:59.645: INFO: namespace: e2e-tests-projected-zc8ts, resource: bindings, ignored listing per whitelist Jun 4 12:28:59.712: INFO: namespace e2e-tests-projected-zc8ts deletion completed in 6.091133901s • [SLOW TEST:10.380 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:28:59.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 4 12:28:59.813: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f7843489-a65e-11ea-86dc-0242ac110018" in namespace "e2e-tests-downward-api-zcmpq" to be "success or failure" Jun 4 12:28:59.859: INFO: Pod "downwardapi-volume-f7843489-a65e-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 45.658645ms Jun 4 12:29:01.863: INFO: Pod "downwardapi-volume-f7843489-a65e-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049443415s Jun 4 12:29:03.867: INFO: Pod "downwardapi-volume-f7843489-a65e-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053833948s STEP: Saw pod success Jun 4 12:29:03.867: INFO: Pod "downwardapi-volume-f7843489-a65e-11ea-86dc-0242ac110018" satisfied condition "success or failure" Jun 4 12:29:03.871: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f7843489-a65e-11ea-86dc-0242ac110018 container client-container: STEP: delete the pod Jun 4 12:29:03.915: INFO: Waiting for pod downwardapi-volume-f7843489-a65e-11ea-86dc-0242ac110018 to disappear Jun 4 12:29:03.919: INFO: Pod downwardapi-volume-f7843489-a65e-11ea-86dc-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:29:03.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-zcmpq" for this suite. 
Jun 4 12:29:09.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:29:10.026: INFO: namespace: e2e-tests-downward-api-zcmpq, resource: bindings, ignored listing per whitelist Jun 4 12:29:10.056: INFO: namespace e2e-tests-downward-api-zcmpq deletion completed in 6.133777515s • [SLOW TEST:10.344 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:29:10.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-bdxh5 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-bdxh5 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-bdxh5 Jun 4 12:29:10.228: INFO: Found 0 stateful pods, waiting for 1 Jun 4 12:29:20.233: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jun 4 12:29:20.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bdxh5 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 4 12:29:20.490: INFO: stderr: "I0604 12:29:20.365362 3709 log.go:172] (0xc0008742c0) (0xc0005c9360) Create stream\nI0604 12:29:20.365450 3709 log.go:172] (0xc0008742c0) (0xc0005c9360) Stream added, broadcasting: 1\nI0604 12:29:20.368015 3709 log.go:172] (0xc0008742c0) Reply frame received for 1\nI0604 12:29:20.368103 3709 log.go:172] (0xc0008742c0) (0xc00058e000) Create stream\nI0604 12:29:20.368134 3709 log.go:172] (0xc0008742c0) (0xc00058e000) Stream added, broadcasting: 3\nI0604 12:29:20.369324 3709 log.go:172] (0xc0008742c0) Reply frame received for 3\nI0604 12:29:20.369374 3709 log.go:172] (0xc0008742c0) (0xc0005c9400) Create stream\nI0604 12:29:20.369385 3709 log.go:172] (0xc0008742c0) (0xc0005c9400) Stream added, broadcasting: 5\nI0604 12:29:20.370476 3709 log.go:172] (0xc0008742c0) Reply frame received for 5\nI0604 12:29:20.480250 3709 log.go:172] (0xc0008742c0) Data frame 
received for 3\nI0604 12:29:20.480299 3709 log.go:172] (0xc00058e000) (3) Data frame handling\nI0604 12:29:20.480327 3709 log.go:172] (0xc00058e000) (3) Data frame sent\nI0604 12:29:20.480346 3709 log.go:172] (0xc0008742c0) Data frame received for 3\nI0604 12:29:20.480363 3709 log.go:172] (0xc00058e000) (3) Data frame handling\nI0604 12:29:20.480466 3709 log.go:172] (0xc0008742c0) Data frame received for 5\nI0604 12:29:20.480504 3709 log.go:172] (0xc0005c9400) (5) Data frame handling\nI0604 12:29:20.483026 3709 log.go:172] (0xc0008742c0) Data frame received for 1\nI0604 12:29:20.483060 3709 log.go:172] (0xc0005c9360) (1) Data frame handling\nI0604 12:29:20.483092 3709 log.go:172] (0xc0005c9360) (1) Data frame sent\nI0604 12:29:20.483134 3709 log.go:172] (0xc0008742c0) (0xc0005c9360) Stream removed, broadcasting: 1\nI0604 12:29:20.483160 3709 log.go:172] (0xc0008742c0) Go away received\nI0604 12:29:20.483376 3709 log.go:172] (0xc0008742c0) (0xc0005c9360) Stream removed, broadcasting: 1\nI0604 12:29:20.483409 3709 log.go:172] (0xc0008742c0) (0xc00058e000) Stream removed, broadcasting: 3\nI0604 12:29:20.483425 3709 log.go:172] (0xc0008742c0) (0xc0005c9400) Stream removed, broadcasting: 5\n" Jun 4 12:29:20.491: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 4 12:29:20.491: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 4 12:29:20.508: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 4 12:29:30.513: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 4 12:29:30.513: INFO: Waiting for statefulset status.replicas updated to 0 Jun 4 12:29:30.536: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999521s Jun 4 12:29:31.541: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.986732022s Jun 4 12:29:32.554: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.981834282s Jun 4 12:29:33.559: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.96894226s Jun 4 12:29:34.567: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.96460451s Jun 4 12:29:35.571: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.956587192s Jun 4 12:29:36.594: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.951979033s Jun 4 12:29:37.598: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.929104446s Jun 4 12:29:38.626: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.925107678s Jun 4 12:29:39.631: INFO: Verifying statefulset ss doesn't scale past 1 for another 896.839747ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-bdxh5 Jun 4 12:29:40.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bdxh5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 12:29:40.904: INFO: stderr: "I0604 12:29:40.794116 3732 log.go:172] (0xc000138a50) (0xc0008815e0) Create stream\nI0604 12:29:40.794220 3732 log.go:172] (0xc000138a50) (0xc0008815e0) Stream added, broadcasting: 1\nI0604 12:29:40.797470 3732 log.go:172] (0xc000138a50) Reply frame received for 1\nI0604 12:29:40.797517 3732 log.go:172] (0xc000138a50) (0xc000312960) Create stream\nI0604 12:29:40.797529 3732 log.go:172] (0xc000138a50) (0xc000312960) Stream added, broadcasting: 
3\nI0604 12:29:40.798376 3732 log.go:172] (0xc000138a50) Reply frame received for 3\nI0604 12:29:40.798396 3732 log.go:172] (0xc000138a50) (0xc000312a00) Create stream\nI0604 12:29:40.798402 3732 log.go:172] (0xc000138a50) (0xc000312a00) Stream added, broadcasting: 5\nI0604 12:29:40.799108 3732 log.go:172] (0xc000138a50) Reply frame received for 5\nI0604 12:29:40.898937 3732 log.go:172] (0xc000138a50) Data frame received for 5\nI0604 12:29:40.898966 3732 log.go:172] (0xc000312a00) (5) Data frame handling\nI0604 12:29:40.898986 3732 log.go:172] (0xc000138a50) Data frame received for 3\nI0604 12:29:40.898991 3732 log.go:172] (0xc000312960) (3) Data frame handling\nI0604 12:29:40.898997 3732 log.go:172] (0xc000312960) (3) Data frame sent\nI0604 12:29:40.899002 3732 log.go:172] (0xc000138a50) Data frame received for 3\nI0604 12:29:40.899006 3732 log.go:172] (0xc000312960) (3) Data frame handling\nI0604 12:29:40.900092 3732 log.go:172] (0xc000138a50) Data frame received for 1\nI0604 12:29:40.900106 3732 log.go:172] (0xc0008815e0) (1) Data frame handling\nI0604 12:29:40.900116 3732 log.go:172] (0xc0008815e0) (1) Data frame sent\nI0604 12:29:40.900126 3732 log.go:172] (0xc000138a50) (0xc0008815e0) Stream removed, broadcasting: 1\nI0604 12:29:40.900140 3732 log.go:172] (0xc000138a50) Go away received\nI0604 12:29:40.900374 3732 log.go:172] (0xc000138a50) (0xc0008815e0) Stream removed, broadcasting: 1\nI0604 12:29:40.900397 3732 log.go:172] (0xc000138a50) (0xc000312960) Stream removed, broadcasting: 3\nI0604 12:29:40.900407 3732 log.go:172] (0xc000138a50) (0xc000312a00) Stream removed, broadcasting: 5\n" Jun 4 12:29:40.904: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 4 12:29:40.904: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 4 12:29:40.908: INFO: Found 1 stateful pods, waiting for 3 Jun 4 12:29:50.914: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 4 12:29:50.914: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 4 12:29:50.914: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jun 4 12:29:50.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bdxh5 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 4 12:29:51.132: INFO: stderr: "I0604 12:29:51.044826 3755 log.go:172] (0xc00080a2c0) (0xc000702640) Create stream\nI0604 12:29:51.044905 3755 log.go:172] (0xc00080a2c0) (0xc000702640) Stream added, broadcasting: 1\nI0604 12:29:51.047614 3755 log.go:172] (0xc00080a2c0) Reply frame received for 1\nI0604 12:29:51.047693 3755 log.go:172] (0xc00080a2c0) (0xc0007026e0) Create stream\nI0604 12:29:51.047720 3755 log.go:172] (0xc00080a2c0) (0xc0007026e0) Stream added, broadcasting: 3\nI0604 12:29:51.048705 3755 log.go:172] (0xc00080a2c0) Reply frame received for 3\nI0604 12:29:51.048751 3755 log.go:172] (0xc00080a2c0) (0xc000702780) Create stream\nI0604 12:29:51.048772 3755 log.go:172] (0xc00080a2c0) (0xc000702780) Stream added, broadcasting: 5\nI0604 12:29:51.049831 3755 log.go:172] (0xc00080a2c0) Reply frame received for 5\nI0604 12:29:51.126743 3755 log.go:172] (0xc00080a2c0) Data frame received for 5\nI0604 12:29:51.126787 3755 log.go:172] (0xc000702780) 
(5) Data frame handling\nI0604 12:29:51.126826 3755 log.go:172] (0xc00080a2c0) Data frame received for 3\nI0604 12:29:51.126863 3755 log.go:172] (0xc0007026e0) (3) Data frame handling\nI0604 12:29:51.126885 3755 log.go:172] (0xc0007026e0) (3) Data frame sent\nI0604 12:29:51.126897 3755 log.go:172] (0xc00080a2c0) Data frame received for 3\nI0604 12:29:51.126907 3755 log.go:172] (0xc0007026e0) (3) Data frame handling\nI0604 12:29:51.128630 3755 log.go:172] (0xc00080a2c0) Data frame received for 1\nI0604 12:29:51.128655 3755 log.go:172] (0xc000702640) (1) Data frame handling\nI0604 12:29:51.128677 3755 log.go:172] (0xc000702640) (1) Data frame sent\nI0604 12:29:51.128694 3755 log.go:172] (0xc00080a2c0) (0xc000702640) Stream removed, broadcasting: 1\nI0604 12:29:51.128791 3755 log.go:172] (0xc00080a2c0) Go away received\nI0604 12:29:51.128874 3755 log.go:172] (0xc00080a2c0) (0xc000702640) Stream removed, broadcasting: 1\nI0604 12:29:51.128895 3755 log.go:172] (0xc00080a2c0) (0xc0007026e0) Stream removed, broadcasting: 3\nI0604 12:29:51.128906 3755 log.go:172] (0xc00080a2c0) (0xc000702780) Stream removed, broadcasting: 5\n" Jun 4 12:29:51.133: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 4 12:29:51.133: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 4 12:29:51.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bdxh5 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 4 12:29:51.366: INFO: stderr: "I0604 12:29:51.254069 3776 log.go:172] (0xc00089a2c0) (0xc0005ef4a0) Create stream\nI0604 12:29:51.254127 3776 log.go:172] (0xc00089a2c0) (0xc0005ef4a0) Stream added, broadcasting: 1\nI0604 12:29:51.256485 3776 log.go:172] (0xc00089a2c0) Reply frame received for 1\nI0604 12:29:51.256622 3776 log.go:172] (0xc00089a2c0) (0xc0000ee000) Create stream\nI0604 12:29:51.256662 3776 log.go:172] (0xc00089a2c0) (0xc0000ee000) Stream added, broadcasting: 3\nI0604 12:29:51.258318 3776 log.go:172] (0xc00089a2c0) Reply frame received for 3\nI0604 12:29:51.258394 3776 log.go:172] (0xc00089a2c0) (0xc0005ef540) Create stream\nI0604 12:29:51.258411 3776 log.go:172] (0xc00089a2c0) (0xc0005ef540) Stream added, broadcasting: 5\nI0604 12:29:51.259666 3776 log.go:172] (0xc00089a2c0) Reply frame received for 5\nI0604 12:29:51.358928 3776 log.go:172] (0xc00089a2c0) Data frame received for 3\nI0604 12:29:51.359112 3776 log.go:172] (0xc0000ee000) (3) Data frame handling\nI0604 12:29:51.359228 3776 log.go:172] (0xc0000ee000) (3) Data frame sent\nI0604 12:29:51.359386 3776 log.go:172] (0xc00089a2c0) Data frame received for 3\nI0604 12:29:51.359403 3776 log.go:172] (0xc0000ee000) (3) Data frame handling\nI0604 12:29:51.359447 3776 log.go:172] (0xc00089a2c0) Data frame received for 5\nI0604 12:29:51.359495 3776 log.go:172] (0xc0005ef540) (5) Data frame handling\nI0604 12:29:51.361022 3776 log.go:172] (0xc00089a2c0) Data frame received for 1\nI0604 12:29:51.361045 3776 log.go:172] (0xc0005ef4a0) (1) Data frame handling\nI0604 12:29:51.361078 3776 log.go:172] (0xc0005ef4a0) (1) Data frame sent\nI0604 12:29:51.361393 3776 log.go:172] (0xc00089a2c0) (0xc0005ef4a0) Stream removed, broadcasting: 1\nI0604 12:29:51.361429 3776 log.go:172] (0xc00089a2c0) Go away received\nI0604 12:29:51.361739 3776 log.go:172] (0xc00089a2c0) (0xc0005ef4a0) Stream removed, broadcasting: 1\nI0604 12:29:51.361778 3776 log.go:172] (0xc00089a2c0) 
(0xc0000ee000) Stream removed, broadcasting: 3\nI0604 12:29:51.361802 3776 log.go:172] (0xc00089a2c0) (0xc0005ef540) Stream removed, broadcasting: 5\n" Jun 4 12:29:51.366: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 4 12:29:51.366: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 4 12:29:51.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bdxh5 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 4 12:29:51.602: INFO: stderr: "I0604 12:29:51.496142 3799 log.go:172] (0xc00080e2c0) (0xc000720640) Create stream\nI0604 12:29:51.496199 3799 log.go:172] (0xc00080e2c0) (0xc000720640) Stream added, broadcasting: 1\nI0604 12:29:51.498921 3799 log.go:172] (0xc00080e2c0) Reply frame received for 1\nI0604 12:29:51.498979 3799 log.go:172] (0xc00080e2c0) (0xc0007206e0) Create stream\nI0604 12:29:51.498995 3799 log.go:172] (0xc00080e2c0) (0xc0007206e0) Stream added, broadcasting: 3\nI0604 12:29:51.500108 3799 log.go:172] (0xc00080e2c0) Reply frame received for 3\nI0604 12:29:51.500152 3799 log.go:172] (0xc00080e2c0) (0xc000720780) Create stream\nI0604 12:29:51.500166 3799 log.go:172] (0xc00080e2c0) (0xc000720780) Stream added, broadcasting: 5\nI0604 12:29:51.501030 3799 log.go:172] (0xc00080e2c0) Reply frame received for 5\nI0604 12:29:51.595553 3799 log.go:172] (0xc00080e2c0) Data frame received for 3\nI0604 12:29:51.595593 3799 log.go:172] (0xc0007206e0) (3) Data frame handling\nI0604 12:29:51.595622 3799 log.go:172] (0xc0007206e0) (3) Data frame sent\nI0604 12:29:51.595943 3799 log.go:172] (0xc00080e2c0) Data frame received for 3\nI0604 12:29:51.595979 3799 log.go:172] (0xc0007206e0) (3) Data frame handling\nI0604 12:29:51.596013 3799 log.go:172] (0xc00080e2c0) Data frame received for 5\nI0604 12:29:51.596037 3799 log.go:172] (0xc000720780) (5) Data frame handling\nI0604 12:29:51.597940 3799 log.go:172] (0xc00080e2c0) Data frame received for 1\nI0604 12:29:51.597969 3799 log.go:172] (0xc000720640) (1) Data frame handling\nI0604 12:29:51.597997 3799 log.go:172] (0xc000720640) (1) Data frame sent\nI0604 12:29:51.598016 3799 log.go:172] (0xc00080e2c0) (0xc000720640) Stream removed, broadcasting: 1\nI0604 12:29:51.598124 3799 log.go:172] (0xc00080e2c0) Go away received\nI0604 12:29:51.598228 3799 log.go:172] (0xc00080e2c0) (0xc000720640) Stream removed, broadcasting: 1\nI0604 12:29:51.598259 3799 log.go:172] (0xc00080e2c0) (0xc0007206e0) Stream removed, broadcasting: 3\nI0604 12:29:51.598275 3799 log.go:172] (0xc00080e2c0) (0xc000720780) Stream removed, broadcasting: 5\n" Jun 4 12:29:51.602: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 4 12:29:51.602: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 4 12:29:51.602: INFO: Waiting for statefulset status.replicas updated to 0 Jun 4 12:29:51.605: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 4 12:30:01.614: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 4 12:30:01.614: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 4 12:30:01.614: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 4 12:30:01.646: INFO: Verifying statefulset ss doesn't scale past 3 for another 
9.999999691s Jun 4 12:30:02.651: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.975635139s Jun 4 12:30:03.670: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.970291914s Jun 4 12:30:04.675: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.951724335s Jun 4 12:30:05.688: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.946638651s Jun 4 12:30:06.708: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.933548782s Jun 4 12:30:07.714: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.913237666s Jun 4 12:30:08.719: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.908038445s Jun 4 12:30:09.741: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.903085289s Jun 4 12:30:10.746: INFO: Verifying statefulset ss doesn't scale past 3 for another 880.198062ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-bdxh5 Jun 4 12:30:11.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bdxh5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 12:30:11.985: INFO: stderr: "I0604 12:30:11.899006 3821 log.go:172] (0xc000138840) (0xc00076e640) Create stream\nI0604 12:30:11.899060 3821 log.go:172] (0xc000138840) (0xc00076e640) Stream added, broadcasting: 1\nI0604 12:30:11.901294 3821 log.go:172] (0xc000138840) Reply frame received for 1\nI0604 12:30:11.901334 3821 log.go:172] (0xc000138840) (0xc000656e60) Create stream\nI0604 12:30:11.901345 3821 log.go:172] (0xc000138840) (0xc000656e60) Stream added, broadcasting: 3\nI0604 12:30:11.902186 3821 log.go:172] (0xc000138840) Reply frame received for 3\nI0604 12:30:11.902258 3821 log.go:172] (0xc000138840) (0xc000730000) Create stream\nI0604 12:30:11.902302 3821 log.go:172] (0xc000138840) (0xc000730000) Stream added, broadcasting: 5\nI0604 12:30:11.903364 3821 log.go:172] (0xc000138840) Reply frame received for 5\nI0604 12:30:11.976989 3821 log.go:172] (0xc000138840) Data frame received for 5\nI0604 12:30:11.977071 3821 log.go:172] (0xc000138840) Data frame received for 3\nI0604 12:30:11.977104 3821 log.go:172] (0xc000656e60) (3) Data frame handling\nI0604 12:30:11.977502 3821 log.go:172] (0xc000656e60) (3) Data frame sent\nI0604 12:30:11.977520 3821 log.go:172] (0xc000138840) Data frame received for 3\nI0604 12:30:11.977529 3821 log.go:172] (0xc000656e60) (3) Data frame handling\nI0604 12:30:11.977541 3821 log.go:172] (0xc000730000) (5) Data frame handling\nI0604 12:30:11.979157 3821 log.go:172] (0xc000138840) Data frame received for 1\nI0604 12:30:11.979194 3821 log.go:172] (0xc00076e640) (1) Data frame handling\nI0604 12:30:11.979223 3821 log.go:172] (0xc00076e640) (1) Data frame sent\nI0604 12:30:11.979406 3821 log.go:172] (0xc000138840) (0xc00076e640) Stream removed, broadcasting: 1\nI0604 12:30:11.979659 3821 log.go:172] (0xc000138840) Go away received\nI0604 12:30:11.979718 3821 log.go:172] (0xc000138840) (0xc00076e640) Stream removed, broadcasting: 1\nI0604 12:30:11.979750 3821 log.go:172] (0xc000138840) (0xc000656e60) Stream removed, broadcasting: 3\nI0604 12:30:11.979855 3821 log.go:172] (0xc000138840) (0xc000730000) Stream removed, broadcasting: 5\n" Jun 4 12:30:11.985: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 4 12:30:11.985: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> 
'/usr/share/nginx/html/index.html' Jun 4 12:30:11.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bdxh5 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 12:30:12.192: INFO: stderr: "I0604 12:30:12.109818 3843 log.go:172] (0xc00015e840) (0xc0005bb360) Create stream\nI0604 12:30:12.109887 3843 log.go:172] (0xc00015e840) (0xc0005bb360) Stream added, broadcasting: 1\nI0604 12:30:12.112178 3843 log.go:172] (0xc00015e840) Reply frame received for 1\nI0604 12:30:12.112225 3843 log.go:172] (0xc00015e840) (0xc000702000) Create stream\nI0604 12:30:12.112242 3843 log.go:172] (0xc00015e840) (0xc000702000) Stream added, broadcasting: 3\nI0604 12:30:12.113017 3843 log.go:172] (0xc00015e840) Reply frame received for 3\nI0604 12:30:12.113055 3843 log.go:172] (0xc00015e840) (0xc0007020a0) Create stream\nI0604 12:30:12.113065 3843 log.go:172] (0xc00015e840) (0xc0007020a0) Stream added, broadcasting: 5\nI0604 12:30:12.113916 3843 log.go:172] (0xc00015e840) Reply frame received for 5\nI0604 12:30:12.186457 3843 log.go:172] (0xc00015e840) Data frame received for 5\nI0604 12:30:12.186483 3843 log.go:172] (0xc0007020a0) (5) Data frame handling\nI0604 12:30:12.186500 3843 log.go:172] (0xc00015e840) Data frame received for 3\nI0604 12:30:12.186505 3843 log.go:172] (0xc000702000) (3) Data frame handling\nI0604 12:30:12.186510 3843 log.go:172] (0xc000702000) (3) Data frame sent\nI0604 12:30:12.186737 3843 log.go:172] (0xc00015e840) Data frame received for 3\nI0604 12:30:12.186766 3843 log.go:172] (0xc000702000) (3) Data frame handling\nI0604 12:30:12.188384 3843 log.go:172] (0xc00015e840) Data frame received for 1\nI0604 12:30:12.188400 3843 log.go:172] (0xc0005bb360) (1) Data frame handling\nI0604 12:30:12.188406 3843 log.go:172] (0xc0005bb360) (1) Data frame sent\nI0604 12:30:12.188414 3843 log.go:172] (0xc00015e840) (0xc0005bb360) Stream removed, broadcasting: 1\nI0604 12:30:12.188423 3843 log.go:172] (0xc00015e840) Go away received\nI0604 12:30:12.188554 3843 log.go:172] (0xc00015e840) (0xc0005bb360) Stream removed, broadcasting: 1\nI0604 12:30:12.188565 3843 log.go:172] (0xc00015e840) (0xc000702000) Stream removed, broadcasting: 3\nI0604 12:30:12.188570 3843 log.go:172] (0xc00015e840) (0xc0007020a0) Stream removed, broadcasting: 5\n" Jun 4 12:30:12.192: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 4 12:30:12.192: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 4 12:30:12.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-bdxh5 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 4 12:30:12.396: INFO: stderr: "I0604 12:30:12.324574 3865 log.go:172] (0xc00013c840) (0xc0005e9540) Create stream\nI0604 12:30:12.324638 3865 log.go:172] (0xc00013c840) (0xc0005e9540) Stream added, broadcasting: 1\nI0604 12:30:12.327162 3865 log.go:172] (0xc00013c840) Reply frame received for 1\nI0604 12:30:12.327220 3865 log.go:172] (0xc00013c840) (0xc00068e000) Create stream\nI0604 12:30:12.327243 3865 log.go:172] (0xc00013c840) (0xc00068e000) Stream added, broadcasting: 3\nI0604 12:30:12.328286 3865 log.go:172] (0xc00013c840) Reply frame received for 3\nI0604 12:30:12.328326 3865 log.go:172] (0xc00013c840) (0xc0006f4000) Create stream\nI0604 12:30:12.328343 3865 log.go:172] (0xc00013c840) (0xc0006f4000) Stream added, broadcasting: 
5\nI0604 12:30:12.329343 3865 log.go:172] (0xc00013c840) Reply frame received for 5\nI0604 12:30:12.389473 3865 log.go:172] (0xc00013c840) Data frame received for 3\nI0604 12:30:12.389499 3865 log.go:172] (0xc00068e000) (3) Data frame handling\nI0604 12:30:12.389515 3865 log.go:172] (0xc00068e000) (3) Data frame sent\nI0604 12:30:12.389520 3865 log.go:172] (0xc00013c840) Data frame received for 3\nI0604 12:30:12.389524 3865 log.go:172] (0xc00068e000) (3) Data frame handling\nI0604 12:30:12.389776 3865 log.go:172] (0xc00013c840) Data frame received for 5\nI0604 12:30:12.389797 3865 log.go:172] (0xc0006f4000) (5) Data frame handling\nI0604 12:30:12.391201 3865 log.go:172] (0xc00013c840) Data frame received for 1\nI0604 12:30:12.391220 3865 log.go:172] (0xc0005e9540) (1) Data frame handling\nI0604 12:30:12.391229 3865 log.go:172] (0xc0005e9540) (1) Data frame sent\nI0604 12:30:12.391240 3865 log.go:172] (0xc00013c840) (0xc0005e9540) Stream removed, broadcasting: 1\nI0604 12:30:12.391255 3865 log.go:172] (0xc00013c840) Go away received\nI0604 12:30:12.391594 3865 log.go:172] (0xc00013c840) (0xc0005e9540) Stream removed, broadcasting: 1\nI0604 12:30:12.391613 3865 log.go:172] (0xc00013c840) (0xc00068e000) Stream removed, broadcasting: 3\nI0604 12:30:12.391620 3865 log.go:172] (0xc00013c840) (0xc0006f4000) Stream removed, broadcasting: 5\n" Jun 4 12:30:12.396: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 4 12:30:12.396: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 4 12:30:12.396: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 4 12:30:42.412: INFO: Deleting all statefulset in ns e2e-tests-statefulset-bdxh5 Jun 4 12:30:42.415: INFO: Scaling statefulset ss to 0 Jun 4 12:30:42.424: INFO: Waiting for statefulset status.replicas updated to 0 Jun 4 12:30:42.426: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:30:42.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-bdxh5" for this suite. 
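The kubectl exec commands in the StatefulSet test above move the nginx index file out of (and back into) the web root so the readiness probe flips and scaling can be observed to halt on an unhealthy pod. A small Go sketch that shells out to the same command; the kubeconfig path, namespace, and pod name mirror the log, and the helper name is an assumption.

package main

import (
	"fmt"
	"os/exec"
)

// toggleReadiness moves the nginx index file out of (or back into) the web
// root on one stateful pod, making its readiness probe fail (or pass).
func toggleReadiness(kubeconfig, namespace, pod string, healthy bool) error {
	cmd := "mv -v /usr/share/nginx/html/index.html /tmp/ || true"
	if healthy {
		cmd = "mv -v /tmp/index.html /usr/share/nginx/html/ || true"
	}
	out, err := exec.Command("kubectl", "--kubeconfig="+kubeconfig,
		"exec", "--namespace="+namespace, pod, "--", "/bin/sh", "-c", cmd).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	_ = toggleReadiness("/root/.kube/config", "e2e-tests-statefulset-bdxh5", "ss-0", false)
}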
Jun 4 12:30:48.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:30:48.618: INFO: namespace: e2e-tests-statefulset-bdxh5, resource: bindings, ignored listing per whitelist Jun 4 12:30:48.627: INFO: namespace e2e-tests-statefulset-bdxh5 deletion completed in 6.186124526s • [SLOW TEST:98.571 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:30:48.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-vj9m STEP: Creating a pod to test atomic-volume-subpath Jun 4 12:30:48.742: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-vj9m" in namespace "e2e-tests-subpath-gf8wn" to be "success or failure" Jun 4 12:30:48.806: INFO: Pod "pod-subpath-test-projected-vj9m": Phase="Pending", Reason="", readiness=false. Elapsed: 64.736795ms Jun 4 12:30:50.811: INFO: Pod "pod-subpath-test-projected-vj9m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069391143s Jun 4 12:30:52.815: INFO: Pod "pod-subpath-test-projected-vj9m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073715439s Jun 4 12:30:54.819: INFO: Pod "pod-subpath-test-projected-vj9m": Phase="Running", Reason="", readiness=true. Elapsed: 6.077232116s Jun 4 12:30:56.823: INFO: Pod "pod-subpath-test-projected-vj9m": Phase="Running", Reason="", readiness=false. Elapsed: 8.081719585s Jun 4 12:30:58.827: INFO: Pod "pod-subpath-test-projected-vj9m": Phase="Running", Reason="", readiness=false. Elapsed: 10.08520305s Jun 4 12:31:00.832: INFO: Pod "pod-subpath-test-projected-vj9m": Phase="Running", Reason="", readiness=false. Elapsed: 12.090399945s Jun 4 12:31:02.837: INFO: Pod "pod-subpath-test-projected-vj9m": Phase="Running", Reason="", readiness=false. Elapsed: 14.095496805s Jun 4 12:31:04.842: INFO: Pod "pod-subpath-test-projected-vj9m": Phase="Running", Reason="", readiness=false. Elapsed: 16.100006812s Jun 4 12:31:06.847: INFO: Pod "pod-subpath-test-projected-vj9m": Phase="Running", Reason="", readiness=false. Elapsed: 18.104870333s Jun 4 12:31:08.860: INFO: Pod "pod-subpath-test-projected-vj9m": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.11870214s Jun 4 12:31:10.866: INFO: Pod "pod-subpath-test-projected-vj9m": Phase="Running", Reason="", readiness=false. Elapsed: 22.123834987s Jun 4 12:31:12.870: INFO: Pod "pod-subpath-test-projected-vj9m": Phase="Running", Reason="", readiness=false. Elapsed: 24.128140207s Jun 4 12:31:14.875: INFO: Pod "pod-subpath-test-projected-vj9m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.13304553s STEP: Saw pod success Jun 4 12:31:14.875: INFO: Pod "pod-subpath-test-projected-vj9m" satisfied condition "success or failure" Jun 4 12:31:14.880: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-vj9m container test-container-subpath-projected-vj9m: STEP: delete the pod Jun 4 12:31:14.940: INFO: Waiting for pod pod-subpath-test-projected-vj9m to disappear Jun 4 12:31:14.944: INFO: Pod pod-subpath-test-projected-vj9m no longer exists STEP: Deleting pod pod-subpath-test-projected-vj9m Jun 4 12:31:14.944: INFO: Deleting pod "pod-subpath-test-projected-vj9m" in namespace "e2e-tests-subpath-gf8wn" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:31:14.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-gf8wn" for this suite. Jun 4 12:31:20.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 4 12:31:20.984: INFO: namespace: e2e-tests-subpath-gf8wn, resource: bindings, ignored listing per whitelist Jun 4 12:31:21.038: INFO: namespace e2e-tests-subpath-gf8wn deletion completed in 6.090041048s • [SLOW TEST:32.411 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 4 12:31:21.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Jun 4 12:31:25.168: INFO: Pod pod-hostip-4bc47159-a65f-11ea-86dc-0242ac110018 has hostIP: 172.17.0.4 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 4 12:31:25.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-dpdfk" for this suite. 
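The Subpath "Atomic writer volumes" tests in this log mount a single entry of an atomically written volume via volumeMounts[].subPath. A hedged Go sketch of such a mount over a projected configMap; the configMap name, key, paths, and image are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"cat", "/test-volume/data"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
					// SubPath mounts only the "dir" entry of the volume, not the volume root.
					SubPath: "dir",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
								// Place the "data" key under dir/ so the subPath above resolves to it.
								Items: []corev1.KeyToPath{{Key: "data", Path: "dir/data"}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}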
SSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 4 12:31:21.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jun 4 12:31:25.168: INFO: Pod pod-hostip-4bc47159-a65f-11ea-86dc-0242ac110018 has hostIP: 172.17.0.4
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 4 12:31:25.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-dpdfk" for this suite.
Jun 4 12:31:55.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 4 12:31:55.259: INFO: namespace: e2e-tests-pods-dpdfk, resource: bindings, ignored listing per whitelist
Jun 4 12:31:55.259: INFO: namespace e2e-tests-pods-dpdfk deletion completed in 30.086913619s

• [SLOW TEST:34.220 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
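Editor's note: the host-IP spec above submits a pod and asserts that `status.hostIP` is populated once the pod is scheduled (here it reported 172.17.0.4). A hedged way to check the same field from Go, shelling out to kubectl rather than using the framework's client; the pod name `pod-hostip-demo` and the namespace are placeholders.

```go
// Hedged sketch: read .status.hostIP of a pod via kubectl's jsonpath output.
// "pod-hostip-demo" and namespace "default" are illustrative, not the names
// the conformance test uses.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "get", "pod", "pod-hostip-demo",
		"-n", "default", "-o", "jsonpath={.status.hostIP}").Output()
	if err != nil {
		log.Fatalf("kubectl get failed: %v", err)
	}
	if len(out) == 0 {
		log.Fatal("hostIP not set yet; the pod may not be scheduled")
	}
	fmt.Printf("hostIP: %s\n", out)
}
```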
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 4 12:31:55.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 4 12:31:55.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 4 12:31:59.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-n7vx2" for this suite.
Jun 4 12:32:39.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 4 12:32:39.609: INFO: namespace: e2e-tests-pods-n7vx2, resource: bindings, ignored listing per whitelist
Jun 4 12:32:39.631: INFO: namespace e2e-tests-pods-n7vx2 deletion completed in 40.105421468s

• [SLOW TEST:44.372 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
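Editor's note: the websocket spec above runs a command in a pod through the API server's `exec` subresource over a websocket connection instead of the usual SPDY upgrade. The e2e framework uses its own websocket client for this; as a hedged stand-in, the sketch below only shows how the exec subresource URL is composed and then defers the actual call to `kubectl exec`, which negotiates the streaming protocol for you. The pod name `demo-pod`, container name, and command are assumptions.

```go
// Hedged sketch: the exec subresource URL the websocket test targets, plus a
// plain "kubectl exec" as a stand-in for the framework's websocket client.
// Pod name, namespace, container and command are illustrative only.
package main

import (
	"fmt"
	"log"
	"net/url"
	"os/exec"
)

func main() {
	// The streaming endpoint lives under the pod's "exec" subresource.
	q := url.Values{}
	q.Set("container", "main")
	q.Add("command", "echo")
	q.Add("command", "remotely executed")
	q.Set("stdout", "true")
	q.Set("stderr", "true")
	fmt.Println("wss://<apiserver>/api/v1/namespaces/default/pods/demo-pod/exec?" + q.Encode())

	// kubectl performs the equivalent call and streams the command output back.
	out, err := exec.Command("kubectl", "exec", "demo-pod", "-n", "default",
		"--", "echo", "remotely executed").CombinedOutput()
	if err != nil {
		log.Fatalf("exec failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}
```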
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 4 12:32:39.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-k8cw
STEP: Creating a pod to test atomic-volume-subpath
Jun 4 12:32:39.732: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-k8cw" in namespace "e2e-tests-subpath-bv9d6" to be "success or failure"
Jun 4 12:32:39.735: INFO: Pod "pod-subpath-test-downwardapi-k8cw": Phase="Pending", Reason="", readiness=false. Elapsed: 3.770926ms
Jun 4 12:32:41.748: INFO: Pod "pod-subpath-test-downwardapi-k8cw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015840665s
Jun 4 12:32:43.751: INFO: Pod "pod-subpath-test-downwardapi-k8cw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019032133s
Jun 4 12:32:45.755: INFO: Pod "pod-subpath-test-downwardapi-k8cw": Phase="Running", Reason="", readiness=true. Elapsed: 6.022936513s
Jun 4 12:32:47.770: INFO: Pod "pod-subpath-test-downwardapi-k8cw": Phase="Running", Reason="", readiness=false. Elapsed: 8.037883686s
Jun 4 12:32:49.772: INFO: Pod "pod-subpath-test-downwardapi-k8cw": Phase="Running", Reason="", readiness=false. Elapsed: 10.040554087s
Jun 4 12:32:51.782: INFO: Pod "pod-subpath-test-downwardapi-k8cw": Phase="Running", Reason="", readiness=false. Elapsed: 12.04981948s
Jun 4 12:32:53.788: INFO: Pod "pod-subpath-test-downwardapi-k8cw": Phase="Running", Reason="", readiness=false. Elapsed: 14.055966806s
Jun 4 12:32:55.807: INFO: Pod "pod-subpath-test-downwardapi-k8cw": Phase="Running", Reason="", readiness=false. Elapsed: 16.074840355s
Jun 4 12:32:57.821: INFO: Pod "pod-subpath-test-downwardapi-k8cw": Phase="Running", Reason="", readiness=false. Elapsed: 18.088891035s
Jun 4 12:32:59.825: INFO: Pod "pod-subpath-test-downwardapi-k8cw": Phase="Running", Reason="", readiness=false. Elapsed: 20.093104431s
Jun 4 12:33:01.829: INFO: Pod "pod-subpath-test-downwardapi-k8cw": Phase="Running", Reason="", readiness=false. Elapsed: 22.097222248s
Jun 4 12:33:03.833: INFO: Pod "pod-subpath-test-downwardapi-k8cw": Phase="Running", Reason="", readiness=false. Elapsed: 24.100914306s
Jun 4 12:33:05.837: INFO: Pod "pod-subpath-test-downwardapi-k8cw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.104810742s
STEP: Saw pod success
Jun 4 12:33:05.837: INFO: Pod "pod-subpath-test-downwardapi-k8cw" satisfied condition "success or failure"
Jun 4 12:33:05.840: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-k8cw container test-container-subpath-downwardapi-k8cw:
STEP: delete the pod
Jun 4 12:33:05.865: INFO: Waiting for pod pod-subpath-test-downwardapi-k8cw to disappear
Jun 4 12:33:05.871: INFO: Pod pod-subpath-test-downwardapi-k8cw no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-k8cw
Jun 4 12:33:05.871: INFO: Deleting pod "pod-subpath-test-downwardapi-k8cw" in namespace "e2e-tests-subpath-bv9d6"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 4 12:33:05.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-bv9d6" for this suite.
Jun 4 12:33:11.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 4 12:33:11.978: INFO: namespace: e2e-tests-subpath-bv9d6, resource: bindings, ignored listing per whitelist
Jun 4 12:33:11.982: INFO: namespace e2e-tests-subpath-bv9d6 deletion completed in 6.105521043s

• [SLOW TEST:32.351 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
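Editor's note: the downward-API variant above follows the same atomic-writer subPath pattern, but the volume's source is pod metadata rather than a ConfigMap. A hedged, simplified manifest of that shape is sketched below; the pod name, label, and paths are invented, and this is not the manifest the test generates. Print it and pipe it into `kubectl apply -f -`.

```go
// Hedged sketch: a downward API volume exposing the pod's labels, mounted
// through subPath as a single file. All names are illustrative only.
// Run with:  go run main.go | kubectl apply -f -
package main

import "fmt"

const podYAML = `
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-downward-demo
  labels:
    zone: us-east-1a
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/podinfo/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo/labels
      subPath: labels          # mount only the "labels" file from the volume
`

func main() {
	fmt.Print(podYAML)
}
```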
SSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 4 12:33:11.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-8de12ab7-a650-11ea-86dc-0242ac110018
STEP: Creating a pod to test consume configMaps
Jun 4 12:33:12.079: INFO: Waiting up to 5m0s for pod "pod-configmaps-8de1dfb3-a65f-11ea-86dc-0242ac110018" in namespace "e2e-tests-configmap-nwn99" to be "success or failure"
Jun 4 12:33:12.099: INFO: Pod "pod-configmaps-8de1dfb3-a65f-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 20.046751ms
Jun 4 12:33:14.104: INFO: Pod "pod-configmaps-8de1dfb3-a65f-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024798079s
Jun 4 12:33:16.108: INFO: Pod "pod-configmaps-8de1dfb3-a65f-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029282999s
STEP: Saw pod success
Jun 4 12:33:16.108: INFO: Pod "pod-configmaps-8de1dfb3-a65f-11ea-86dc-0242ac110018" satisfied condition "success or failure"
Jun 4 12:33:16.112: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-8de1dfb3-a65f-11ea-86dc-0242ac110018 container configmap-volume-test:
STEP: delete the pod
Jun 4 12:33:16.133: INFO: Waiting for pod pod-configmaps-8de1dfb3-a65f-11ea-86dc-0242ac110018 to disappear
Jun 4 12:33:16.137: INFO: Pod pod-configmaps-8de1dfb3-a65f-11ea-86dc-0242ac110018 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 4 12:33:16.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-nwn99" for this suite.
Jun 4 12:33:22.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 4 12:33:22.212: INFO: namespace: e2e-tests-configmap-nwn99, resource: bindings, ignored listing per whitelist
Jun 4 12:33:22.227: INFO: namespace e2e-tests-configmap-nwn99 deletion completed in 6.086309695s

• [SLOW TEST:10.244 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
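Editor's note: the ConfigMap spec above mounts the same ConfigMap into two separate volumes of one pod and checks that the data is visible at both mount paths. A hedged approximation of such a pod is sketched below; the ConfigMap name, key, and paths are placeholders rather than the test's generated names. Print it and pipe it into `kubectl apply -f -`.

```go
// Hedged sketch: one ConfigMap consumed through two volumes in the same pod.
// Run with:  go run main.go | kubectl apply -f -
// All names are placeholders; a ConfigMap "demo-config" with key "data-1" is assumed.
package main

import "fmt"

const podYAML = `
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-volume-1
    configMap:
      name: demo-config
  - name: configmap-volume-2
    configMap:
      name: demo-config
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
`

func main() {
	fmt.Print(podYAML)
}
```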
S
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 4 12:33:22.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 4 12:33:22.332: INFO: Waiting up to 5m0s for pod "pod-93fd8104-a65f-11ea-86dc-0242ac110018" in namespace "e2e-tests-emptydir-mj5pr" to be "success or failure"
Jun 4 12:33:22.335: INFO: Pod "pod-93fd8104-a65f-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.290366ms
Jun 4 12:33:24.366: INFO: Pod "pod-93fd8104-a65f-11ea-86dc-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033887995s
Jun 4 12:33:26.370: INFO: Pod "pod-93fd8104-a65f-11ea-86dc-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038473959s
STEP: Saw pod success
Jun 4 12:33:26.371: INFO: Pod "pod-93fd8104-a65f-11ea-86dc-0242ac110018" satisfied condition "success or failure"
Jun 4 12:33:26.374: INFO: Trying to get logs from node hunter-worker2 pod pod-93fd8104-a65f-11ea-86dc-0242ac110018 container test-container:
STEP: delete the pod
Jun 4 12:33:26.455: INFO: Waiting for pod pod-93fd8104-a65f-11ea-86dc-0242ac110018 to disappear
Jun 4 12:33:26.465: INFO: Pod pod-93fd8104-a65f-11ea-86dc-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 4 12:33:26.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mj5pr" for this suite.
Jun 4 12:33:32.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 4 12:33:32.494: INFO: namespace: e2e-tests-emptydir-mj5pr, resource: bindings, ignored listing per whitelist
Jun 4 12:33:32.560: INFO: namespace e2e-tests-emptydir-mj5pr deletion completed in 6.091959419s

• [SLOW TEST:10.333 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
Jun 4 12:33:32.560: INFO: Running AfterSuite actions on all nodes
Jun 4 12:33:32.560: INFO: Running AfterSuite actions on node 1
Jun 4 12:33:32.560: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 6398.373 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS
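Editor's note: the last spec before the suite summary exercised an emptyDir volume on the default medium, written by a non-root user with 0644 file mode. For reference, a hedged sketch of a pod of that kind; the pod name, UID, and commands are illustrative and not the conformance test's generated values. Print it and pipe it into `kubectl apply -f -`.

```go
// Hedged sketch: emptyDir on the default medium, written by a non-root user,
// creating a file with mode 0644 and listing it. Illustrative only.
// Run with:  go run main.go | kubectl apply -f -
package main

import "fmt"

const podYAML = `
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # non-root; the actual test uses its own fixed UID
  volumes:
  - name: cache
    emptyDir: {}               # default medium (node disk), not Memory
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /cache/file && chmod 0644 /cache/file && ls -l /cache/file"]
    volumeMounts:
    - name: cache
      mountPath: /cache
`

func main() {
	fmt.Print(podYAML)
}
```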