I0615 10:46:54.343208 6 e2e.go:224] Starting e2e run "86b83ddd-aef5-11ea-99db-0242ac11001b" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1592218013 - Will randomize all specs
Will run 201 of 2164 specs

Jun 15 10:46:54.529: INFO: >>> kubeConfig: /root/.kube/config
Jun 15 10:46:54.531: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 15 10:46:54.547: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 15 10:46:54.583: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 15 10:46:54.583: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 15 10:46:54.583: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 15 10:46:54.592: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 15 10:46:54.592: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 15 10:46:54.592: INFO: e2e test version: v1.13.12
Jun 15 10:46:54.593: INFO: kube-apiserver version: v1.13.12
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 15 10:46:54.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
Jun 15 10:46:54.734: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-873cef5f-aef5-11ea-99db-0242ac11001b
Jun 15 10:46:54.772: INFO: Pod name my-hostname-basic-873cef5f-aef5-11ea-99db-0242ac11001b: Found 0 pods out of 1
Jun 15 10:46:59.776: INFO: Pod name my-hostname-basic-873cef5f-aef5-11ea-99db-0242ac11001b: Found 1 pods out of 1
Jun 15 10:46:59.776: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-873cef5f-aef5-11ea-99db-0242ac11001b" are running
Jun 15 10:46:59.779: INFO: Pod "my-hostname-basic-873cef5f-aef5-11ea-99db-0242ac11001b-ccttf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-15 10:46:54 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-15 10:46:58 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-15 10:46:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-15 10:46:54 +0000 UTC Reason: Message:}])
Jun 15 10:46:59.779: INFO: Trying to dial the pod
Jun 15 10:47:04.791: INFO: Controller my-hostname-basic-873cef5f-aef5-11ea-99db-0242ac11001b: Got expected result from replica 1 [my-hostname-basic-873cef5f-aef5-11ea-99db-0242ac11001b-ccttf]: "my-hostname-basic-873cef5f-aef5-11ea-99db-0242ac11001b-ccttf", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 15 10:47:04.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-qv46w" for this suite.
Jun 15 10:47:10.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 15 10:47:10.833: INFO: namespace: e2e-tests-replication-controller-qv46w, resource: bindings, ignored listing per whitelist
Jun 15 10:47:10.900: INFO: namespace e2e-tests-replication-controller-qv46w deletion completed in 6.105586506s
• [SLOW TEST:16.307 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 15 10:47:10.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-90f337ad-aef5-11ea-99db-0242ac11001b
STEP: Creating a pod to test consume configMaps
Jun 15 10:47:11.053: INFO: Waiting up to 5m0s for pod "pod-configmaps-90f58099-aef5-11ea-99db-0242ac11001b" in namespace "e2e-tests-configmap-7qvqc" to be "success or failure"
Jun 15 10:47:11.056: INFO: Pod "pod-configmaps-90f58099-aef5-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.904802ms
Jun 15 10:47:13.060: INFO: Pod "pod-configmaps-90f58099-aef5-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007341446s
Jun 15 10:47:15.064: INFO: Pod "pod-configmaps-90f58099-aef5-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011118446s
Jun 15 10:47:17.069: INFO: Pod "pod-configmaps-90f58099-aef5-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016031561s
STEP: Saw pod success
Jun 15 10:47:17.069: INFO: Pod "pod-configmaps-90f58099-aef5-11ea-99db-0242ac11001b" satisfied condition "success or failure"
Jun 15 10:47:17.073: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-90f58099-aef5-11ea-99db-0242ac11001b container configmap-volume-test:
STEP: delete the pod
Jun 15 10:47:17.094: INFO: Waiting for pod pod-configmaps-90f58099-aef5-11ea-99db-0242ac11001b to disappear
Jun 15 10:47:17.098: INFO: Pod pod-configmaps-90f58099-aef5-11ea-99db-0242ac11001b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 15 10:47:17.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7qvqc" for this suite.
Jun 15 10:47:23.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 15 10:47:23.180: INFO: namespace: e2e-tests-configmap-7qvqc, resource: bindings, ignored listing per whitelist
Jun 15 10:47:23.208: INFO: namespace e2e-tests-configmap-7qvqc deletion completed in 6.10674989s
• [SLOW TEST:12.308 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 15 10:47:23.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-ggdql.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-ggdql.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-ggdql.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-ggdql.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-ggdql.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-ggdql.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 15 10:47:31.435: INFO: DNS probes using e2e-tests-dns-ggdql/dns-test-9845eff9-aef5-11ea-99db-0242ac11001b succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 15 10:47:31.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-ggdql" for this suite.
Jun 15 10:47:37.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 15 10:47:37.525: INFO: namespace: e2e-tests-dns-ggdql, resource: bindings, ignored listing per whitelist
Jun 15 10:47:37.555: INFO: namespace e2e-tests-dns-ggdql deletion completed in 6.065925924s
• [SLOW TEST:14.347 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 15 10:47:37.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-a0cfbae2-aef5-11ea-99db-0242ac11001b
STEP: Creating a pod to test consume secrets
Jun 15 10:47:37.734: INFO: Waiting up to 5m0s for pod "pod-secrets-a0dcebf5-aef5-11ea-99db-0242ac11001b" in namespace "e2e-tests-secrets-pvdmc" to be "success or failure"
Jun 15 10:47:37.739: INFO: Pod "pod-secrets-a0dcebf5-aef5-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.811847ms
Jun 15 10:47:39.744: INFO: Pod "pod-secrets-a0dcebf5-aef5-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010228196s
Jun 15 10:47:41.748: INFO: Pod "pod-secrets-a0dcebf5-aef5-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014306314s
STEP: Saw pod success
Jun 15 10:47:41.748: INFO: Pod "pod-secrets-a0dcebf5-aef5-11ea-99db-0242ac11001b" satisfied condition "success or failure"
Jun 15 10:47:41.751: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-a0dcebf5-aef5-11ea-99db-0242ac11001b container secret-volume-test:
STEP: delete the pod
Jun 15 10:47:41.794: INFO: Waiting for pod pod-secrets-a0dcebf5-aef5-11ea-99db-0242ac11001b to disappear
Jun 15 10:47:41.818: INFO: Pod pod-secrets-a0dcebf5-aef5-11ea-99db-0242ac11001b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 15 10:47:41.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-pvdmc" for this suite.
Jun 15 10:47:47.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 15 10:47:47.862: INFO: namespace: e2e-tests-secrets-pvdmc, resource: bindings, ignored listing per whitelist
Jun 15 10:47:47.916: INFO: namespace e2e-tests-secrets-pvdmc deletion completed in 6.094092814s
• [SLOW TEST:10.361 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 15 10:47:47.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-a7091f4f-aef5-11ea-99db-0242ac11001b
STEP: Creating a pod to test consume secrets
Jun 15 10:47:48.095: INFO: Waiting up to 5m0s for pod "pod-secrets-a70990ba-aef5-11ea-99db-0242ac11001b" in namespace "e2e-tests-secrets-f59sj" to be "success or failure"
Jun 15 10:47:48.166: INFO: Pod "pod-secrets-a70990ba-aef5-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 71.749877ms
Jun 15 10:47:50.170: INFO: Pod "pod-secrets-a70990ba-aef5-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075368876s
Jun 15 10:47:52.452: INFO: Pod "pod-secrets-a70990ba-aef5-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.357002704s
STEP: Saw pod success
Jun 15 10:47:52.452: INFO: Pod "pod-secrets-a70990ba-aef5-11ea-99db-0242ac11001b" satisfied condition "success or failure"
Jun 15 10:47:52.455: INFO: Trying to get logs from node hunter-worker pod pod-secrets-a70990ba-aef5-11ea-99db-0242ac11001b container secret-volume-test:
STEP: delete the pod
Jun 15 10:47:52.527: INFO: Waiting for pod pod-secrets-a70990ba-aef5-11ea-99db-0242ac11001b to disappear
Jun 15 10:47:52.643: INFO: Pod pod-secrets-a70990ba-aef5-11ea-99db-0242ac11001b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 15 10:47:52.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-f59sj" for this suite.
Jun 15 10:47:59.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 15 10:47:59.586: INFO: namespace: e2e-tests-secrets-f59sj, resource: bindings, ignored listing per whitelist
Jun 15 10:47:59.626: INFO: namespace e2e-tests-secrets-f59sj deletion completed in 6.979005756s
• [SLOW TEST:11.710 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 15 10:47:59.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jun 15 10:48:03.860: INFO: error from create uninitialized namespace:
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 15 10:48:27.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-9hg6c" for this suite.
Jun 15 10:48:34.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 15 10:48:34.076: INFO: namespace: e2e-tests-namespaces-9hg6c, resource: bindings, ignored listing per whitelist
Jun 15 10:48:34.114: INFO: namespace e2e-tests-namespaces-9hg6c deletion completed in 6.128320574s
STEP: Destroying namespace "e2e-tests-nsdeletetest-4294v" for this suite.
Jun 15 10:48:34.117: INFO: Namespace e2e-tests-nsdeletetest-4294v was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-4fvd5" for this suite.
Jun 15 10:48:40.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 15 10:48:40.172: INFO: namespace: e2e-tests-nsdeletetest-4fvd5, resource: bindings, ignored listing per whitelist
Jun 15 10:48:40.213: INFO: namespace e2e-tests-nsdeletetest-4fvd5 deletion completed in 6.096171966s
• [SLOW TEST:40.587 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 15 10:48:40.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jun 15 10:48:40.949: INFO: Pod name wrapped-volume-race-c684e4ef-aef5-11ea-99db-0242ac11001b: Found 0 pods out of 5
Jun 15 10:48:45.958: INFO: Pod name wrapped-volume-race-c684e4ef-aef5-11ea-99db-0242ac11001b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c684e4ef-aef5-11ea-99db-0242ac11001b in namespace e2e-tests-emptydir-wrapper-465dc, will wait for the garbage collector to delete the pods
Jun 15 10:51:18.607: INFO: Deleting ReplicationController wrapped-volume-race-c684e4ef-aef5-11ea-99db-0242ac11001b took: 6.032524ms
Jun 15 10:51:18.807: INFO: Terminating ReplicationController wrapped-volume-race-c684e4ef-aef5-11ea-99db-0242ac11001b pods took: 200.224509ms
STEP: Creating RC which spawns configmap-volume pods
Jun 15 10:52:01.866: INFO: Pod name wrapped-volume-race-3e4465a8-aef6-11ea-99db-0242ac11001b: Found 0 pods out of 5
Jun 15 10:52:06.873: INFO: Pod name wrapped-volume-race-3e4465a8-aef6-11ea-99db-0242ac11001b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3e4465a8-aef6-11ea-99db-0242ac11001b in namespace e2e-tests-emptydir-wrapper-465dc, will wait for the garbage collector to delete the pods
Jun 15 10:54:42.970: INFO: Deleting ReplicationController wrapped-volume-race-3e4465a8-aef6-11ea-99db-0242ac11001b took: 8.171701ms
Jun 15 10:54:43.070: INFO: Terminating ReplicationController wrapped-volume-race-3e4465a8-aef6-11ea-99db-0242ac11001b pods took: 100.320247ms
STEP: Creating RC which spawns configmap-volume pods
Jun 15 10:55:21.408: INFO: Pod name wrapped-volume-race-b5374391-aef6-11ea-99db-0242ac11001b: Found 0 pods out of 5
Jun 15 10:55:26.414: INFO: Pod name wrapped-volume-race-b5374391-aef6-11ea-99db-0242ac11001b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b5374391-aef6-11ea-99db-0242ac11001b in namespace e2e-tests-emptydir-wrapper-465dc, will wait for the garbage collector to delete the pods
Jun 15 10:57:36.499: INFO: Deleting ReplicationController wrapped-volume-race-b5374391-aef6-11ea-99db-0242ac11001b took: 7.133527ms
Jun 15 10:57:36.600: INFO: Terminating ReplicationController wrapped-volume-race-b5374391-aef6-11ea-99db-0242ac11001b pods took: 100.356479ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 15 10:58:22.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-465dc" for this suite.
Jun 15 10:58:30.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 15 10:58:30.260: INFO: namespace: e2e-tests-emptydir-wrapper-465dc, resource: bindings, ignored listing per whitelist
Jun 15 10:58:30.306: INFO: namespace e2e-tests-emptydir-wrapper-465dc deletion completed in 8.096510826s
• [SLOW TEST:590.092 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Pods
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 15 10:58:30.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jun 15 10:58:34.447: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-25e360f9-aef7-11ea-99db-0242ac11001b", GenerateName:"", Namespace:"e2e-tests-pods-c46fp", SelfLink:"/api/v1/namespaces/e2e-tests-pods-c46fp/pods/pod-submit-remove-25e360f9-aef7-11ea-99db-0242ac11001b", UID:"25e7026d-aef7-11ea-99e8-0242ac110002", ResourceVersion:"16064393", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63727815510, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"403514289"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-g8zw2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil),
GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001de5200), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-g8zw2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001b20d38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0010bbf20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b20d80)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001b20da0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001b20da8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001b20dac)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727815510, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, 
v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727815514, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727815514, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727815510, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.224", StartTime:(*v1.Time)(0xc000b10ba0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000b10bc0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://b4b0fedce76a44fad7539ae2da5447826ad51415626649da49bf8ac234cef88a"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 10:58:41.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-c46fp" for this suite. 
Jun 15 10:58:47.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 15 10:58:47.299: INFO: namespace: e2e-tests-pods-c46fp, resource: bindings, ignored listing per whitelist
Jun 15 10:58:47.357: INFO: namespace e2e-tests-pods-c46fp deletion completed in 6.091649618s
• [SLOW TEST:17.051 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 15 10:58:47.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jun 15 10:58:47.428: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Jun 15 10:58:47.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fr8d8'
Jun 15 10:58:50.065: INFO: stderr: ""
Jun 15 10:58:50.065: INFO: stdout: "service/redis-slave created\n"
Jun 15 10:58:50.065: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Jun 15 10:58:50.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fr8d8'
Jun 15 10:58:50.361: INFO: stderr: ""
Jun 15 10:58:50.361: INFO: stdout: "service/redis-master created\n"
Jun 15 10:58:50.361: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Jun 15 10:58:50.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fr8d8'
Jun 15 10:58:50.669: INFO: stderr: ""
Jun 15 10:58:50.669: INFO: stdout: "service/frontend created\n"
Jun 15 10:58:50.669: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Jun 15 10:58:50.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fr8d8'
Jun 15 10:58:50.908: INFO: stderr: ""
Jun 15 10:58:50.908: INFO: stdout: "deployment.extensions/frontend created\n"
Jun 15 10:58:50.909: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jun 15 10:58:50.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fr8d8'
Jun 15 10:58:52.012: INFO: stderr: ""
Jun 15 10:58:52.013: INFO: stdout: "deployment.extensions/redis-master created\n"
Jun 15 10:58:52.013: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Jun 15 10:58:52.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-fr8d8'
Jun 15 10:58:52.298: INFO: stderr: ""
Jun 15 10:58:52.298: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jun 15 10:58:52.298: INFO: Waiting for all frontend pods to be Running.
Jun 15 10:59:02.348: INFO: Waiting for frontend to serve content.
Jun 15 10:59:02.476: INFO: Trying to add a new entry to the guestbook.
Jun 15 10:59:02.492: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jun 15 10:59:02.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fr8d8'
Jun 15 10:59:02.789: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 15 10:59:02.789: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jun 15 10:59:02.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fr8d8'
Jun 15 10:59:02.992: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 15 10:59:02.992: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jun 15 10:59:02.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fr8d8'
Jun 15 10:59:03.251: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 15 10:59:03.251: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jun 15 10:59:03.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fr8d8'
Jun 15 10:59:03.481: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 15 10:59:03.481: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jun 15 10:59:03.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fr8d8'
Jun 15 10:59:03.604: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 15 10:59:03.604: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jun 15 10:59:03.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-fr8d8'
Jun 15 10:59:03.706: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 15 10:59:03.706: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 15 10:59:03.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fr8d8" for this suite.
Jun 15 10:59:44.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 15 10:59:44.037: INFO: namespace: e2e-tests-kubectl-fr8d8, resource: bindings, ignored listing per whitelist
Jun 15 10:59:44.109: INFO: namespace e2e-tests-kubectl-fr8d8 deletion completed in 40.250516578s
• [SLOW TEST:56.751 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 15 10:59:44.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jun 15 10:59:54.804: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 15 10:59:54.812: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 15 10:59:56.812: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 15 10:59:56.816: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 15 10:59:58.812: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 15 10:59:58.817: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 15 11:00:00.812: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 15 11:00:00.816: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 15 11:00:02.812: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 15 11:00:02.816: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 15 11:00:04.812: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 15 11:00:04.816: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 15 11:00:06.812: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 15 11:00:06.924: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 15 11:00:08.812: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 15 11:00:08.817: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 15 11:00:10.812: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 15 11:00:10.816: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 15 11:00:12.812: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 15 11:00:12.817: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 15 11:00:14.812: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 15 11:00:14.816: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 15 11:00:16.812: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 15 11:00:16.850: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 15 11:00:18.812: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 15 11:00:18.816: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 15 11:00:20.812: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 15 11:00:20.816: INFO: Pod pod-with-prestop-exec-hook still exists
Jun 15 11:00:22.812: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jun 15 11:00:22.816: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 15 11:00:22.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-hg8tq" for this suite.
Jun 15 11:00:44.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 15 11:00:44.894: INFO: namespace: e2e-tests-container-lifecycle-hook-hg8tq, resource: bindings, ignored listing per whitelist
Jun 15 11:00:44.937: INFO: namespace e2e-tests-container-lifecycle-hook-hg8tq deletion completed in 22.108806092s
• [SLOW TEST:60.827 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 15 11:00:44.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 15 11:00:45.085: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7626653c-aef7-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-pl5g8" to be "success or failure"
Jun 15 11:00:45.118: INFO: Pod "downwardapi-volume-7626653c-aef7-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.805869ms
Jun 15 11:00:47.123: INFO: Pod "downwardapi-volume-7626653c-aef7-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037281105s
Jun 15 11:00:49.127: INFO: Pod "downwardapi-volume-7626653c-aef7-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041617393s
STEP: Saw pod success
Jun 15 11:00:49.127: INFO: Pod "downwardapi-volume-7626653c-aef7-11ea-99db-0242ac11001b" satisfied condition "success or failure"
Jun 15 11:00:49.130: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-7626653c-aef7-11ea-99db-0242ac11001b container client-container:
STEP: delete the pod
Jun 15 11:00:49.186: INFO: Waiting for pod downwardapi-volume-7626653c-aef7-11ea-99db-0242ac11001b to disappear
Jun 15 11:00:49.189: INFO: Pod downwardapi-volume-7626653c-aef7-11ea-99db-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 15 11:00:49.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pl5g8" for this suite.
Jun 15 11:00:55.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 15 11:00:55.217: INFO: namespace: e2e-tests-projected-pl5g8, resource: bindings, ignored listing per whitelist
Jun 15 11:00:55.277: INFO: namespace e2e-tests-projected-pl5g8 deletion completed in 6.084239223s
• [SLOW TEST:10.340 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 15 11:00:55.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-ngpn
STEP: Creating a pod to test atomic-volume-subpath
Jun 15 11:00:55.423: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ngpn" in namespace "e2e-tests-subpath-zhzpb" to be "success or failure"
Jun 15 11:00:55.429: INFO: Pod "pod-subpath-test-configmap-ngpn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.19499ms
Jun 15 11:00:57.728: INFO: Pod "pod-subpath-test-configmap-ngpn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305360623s
Jun 15 11:00:59.732: INFO: Pod "pod-subpath-test-configmap-ngpn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309247988s
Jun 15 11:01:01.757: INFO: Pod "pod-subpath-test-configmap-ngpn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.333646074s
Jun 15 11:01:03.761: INFO: Pod "pod-subpath-test-configmap-ngpn": Phase="Running", Reason="", readiness=false. Elapsed: 8.338170081s
Jun 15 11:01:05.766: INFO: Pod "pod-subpath-test-configmap-ngpn": Phase="Running", Reason="", readiness=false. Elapsed: 10.342638663s
Jun 15 11:01:07.770: INFO: Pod "pod-subpath-test-configmap-ngpn": Phase="Running", Reason="", readiness=false. Elapsed: 12.346905339s
Jun 15 11:01:09.774: INFO: Pod "pod-subpath-test-configmap-ngpn": Phase="Running", Reason="", readiness=false. Elapsed: 14.351504461s
Jun 15 11:01:11.779: INFO: Pod "pod-subpath-test-configmap-ngpn": Phase="Running", Reason="", readiness=false. Elapsed: 16.355854888s
Jun 15 11:01:13.783: INFO: Pod "pod-subpath-test-configmap-ngpn": Phase="Running", Reason="", readiness=false. Elapsed: 18.360413363s
Jun 15 11:01:15.788: INFO: Pod "pod-subpath-test-configmap-ngpn": Phase="Running", Reason="", readiness=false. Elapsed: 20.364821926s
Jun 15 11:01:17.791: INFO: Pod "pod-subpath-test-configmap-ngpn": Phase="Running", Reason="", readiness=false. Elapsed: 22.368434528s
Jun 15 11:01:19.795: INFO: Pod "pod-subpath-test-configmap-ngpn": Phase="Running", Reason="", readiness=false. Elapsed: 24.37197208s
Jun 15 11:01:21.932: INFO: Pod "pod-subpath-test-configmap-ngpn": Phase="Running", Reason="", readiness=false. Elapsed: 26.509139498s
Jun 15 11:01:23.937: INFO: Pod "pod-subpath-test-configmap-ngpn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.513639019s
STEP: Saw pod success
Jun 15 11:01:23.937: INFO: Pod "pod-subpath-test-configmap-ngpn" satisfied condition "success or failure"
Jun 15 11:01:23.940: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-ngpn container test-container-subpath-configmap-ngpn:
STEP: delete the pod
Jun 15 11:01:24.005: INFO: Waiting for pod pod-subpath-test-configmap-ngpn to disappear
Jun 15 11:01:24.010: INFO: Pod pod-subpath-test-configmap-ngpn no longer exists
STEP: Deleting pod pod-subpath-test-configmap-ngpn
Jun 15 11:01:24.010: INFO: Deleting pod "pod-subpath-test-configmap-ngpn" in namespace "e2e-tests-subpath-zhzpb"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 15 11:01:24.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-zhzpb" for this suite.
Jun 15 11:01:30.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 15 11:01:30.094: INFO: namespace: e2e-tests-subpath-zhzpb, resource: bindings, ignored listing per whitelist
Jun 15 11:01:30.143: INFO: namespace e2e-tests-subpath-zhzpb deletion completed in 6.129590135s
• [SLOW TEST:34.866 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
  should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 15 11:01:30.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jun 15 11:01:30.252: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 15 11:01:30.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cwdzj" for this suite.
Jun 15 11:01:36.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:01:36.389: INFO: namespace: e2e-tests-kubectl-cwdzj, resource: bindings, ignored listing per whitelist Jun 15 11:01:36.443: INFO: namespace e2e-tests-kubectl-cwdzj deletion completed in 6.098192752s • [SLOW TEST:6.300 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:01:36.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-94d85141-aef7-11ea-99db-0242ac11001b STEP: Creating a pod to test consume secrets Jun 15 11:01:36.606: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-94de0f3e-aef7-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-9ptgx" to be "success or failure" Jun 15 11:01:36.610: INFO: Pod "pod-projected-secrets-94de0f3e-aef7-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.461553ms Jun 15 11:01:38.614: INFO: Pod "pod-projected-secrets-94de0f3e-aef7-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007377364s Jun 15 11:01:40.625: INFO: Pod "pod-projected-secrets-94de0f3e-aef7-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01895598s STEP: Saw pod success Jun 15 11:01:40.625: INFO: Pod "pod-projected-secrets-94de0f3e-aef7-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:01:40.628: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-94de0f3e-aef7-11ea-99db-0242ac11001b container projected-secret-volume-test: STEP: delete the pod Jun 15 11:01:40.662: INFO: Waiting for pod pod-projected-secrets-94de0f3e-aef7-11ea-99db-0242ac11001b to disappear Jun 15 11:01:40.670: INFO: Pod pod-projected-secrets-94de0f3e-aef7-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:01:40.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9ptgx" for this suite. 
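
The projected-secret case above maps a Secret key to a new file name inside a projected volume and sets an explicit per-item mode. A sketch of that pod shape using the k8s.io/api types follows; the secret name, key, image, and the 0400 mode are placeholders for whatever the test generated.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // the per-item "Item Mode" the test asserts on
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"},
								// Map the secret key to a different file name ("mappings").
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /projected && cat /projected/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret", MountPath: "/projected", ReadOnly: true}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0].Projected.Sources[0].Secret)
}
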
Jun 15 11:01:46.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:01:46.743: INFO: namespace: e2e-tests-projected-9ptgx, resource: bindings, ignored listing per whitelist Jun 15 11:01:46.767: INFO: namespace e2e-tests-projected-9ptgx deletion completed in 6.093547883s • [SLOW TEST:10.323 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:01:46.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jun 15 11:01:46.899: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 15 11:01:46.914: INFO: Waiting for terminating namespaces to be deleted... Jun 15 11:01:46.917: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Jun 15 11:01:46.922: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 15 11:01:46.922: INFO: Container kindnet-cni ready: true, restart count 0 Jun 15 11:01:46.922: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 15 11:01:46.922: INFO: Container coredns ready: true, restart count 0 Jun 15 11:01:46.922: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Jun 15 11:01:46.922: INFO: Container kube-proxy ready: true, restart count 0 Jun 15 11:01:46.922: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Jun 15 11:01:46.927: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 15 11:01:46.928: INFO: Container kindnet-cni ready: true, restart count 0 Jun 15 11:01:46.928: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 15 11:01:46.928: INFO: Container coredns ready: true, restart count 0 Jun 15 11:01:46.928: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 15 11:01:46.928: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. 
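
The predicates case above submits a pod whose nodeSelector matches no node label, and the FailedScheduling event recorded next is the expected outcome. A minimal sketch of such a pod follows; the selector key/value and image are illustrative, not the generated label the test actually uses.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// A label no node in the cluster carries, so the scheduler reports
			// "0/3 nodes are available: 3 node(s) didn't match node selector."
			NodeSelector: map[string]string{"kubernetes.io/hostname": "no-such-node"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1", // illustrative image
			}},
		},
	}
	fmt.Printf("nodeSelector: %v\n", pod.Spec.NodeSelector)
}

Creating a pod like this in a cluster whose nodes lack that label should reproduce the "0/3 nodes are available" event shown above.
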
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1618b2a8f10636b7], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:01:47.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-jl8c8" for this suite. Jun 15 11:01:53.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:01:53.995: INFO: namespace: e2e-tests-sched-pred-jl8c8, resource: bindings, ignored listing per whitelist Jun 15 11:01:54.055: INFO: namespace e2e-tests-sched-pred-jl8c8 deletion completed in 6.102063457s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.288 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:01:54.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 15 11:01:54.164: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f537232-aef7-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-5clpg" to be "success or failure" Jun 15 11:01:54.224: INFO: Pod "downwardapi-volume-9f537232-aef7-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 60.140357ms Jun 15 11:01:56.229: INFO: Pod "downwardapi-volume-9f537232-aef7-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0644322s Jun 15 11:01:58.234: INFO: Pod "downwardapi-volume-9f537232-aef7-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.06953645s STEP: Saw pod success Jun 15 11:01:58.234: INFO: Pod "downwardapi-volume-9f537232-aef7-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:01:58.237: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-9f537232-aef7-11ea-99db-0242ac11001b container client-container: STEP: delete the pod Jun 15 11:01:58.299: INFO: Waiting for pod downwardapi-volume-9f537232-aef7-11ea-99db-0242ac11001b to disappear Jun 15 11:01:58.305: INFO: Pod downwardapi-volume-9f537232-aef7-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:01:58.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5clpg" for this suite. Jun 15 11:02:04.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:02:04.362: INFO: namespace: e2e-tests-projected-5clpg, resource: bindings, ignored listing per whitelist Jun 15 11:02:04.443: INFO: namespace e2e-tests-projected-5clpg deletion completed in 6.135051272s • [SLOW TEST:10.388 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:02:04.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Jun 15 11:02:05.088: INFO: created pod pod-service-account-defaultsa Jun 15 11:02:05.088: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jun 15 11:02:05.096: INFO: created pod pod-service-account-mountsa Jun 15 11:02:05.096: INFO: pod pod-service-account-mountsa service account token volume mount: true Jun 15 11:02:05.129: INFO: created pod pod-service-account-nomountsa Jun 15 11:02:05.129: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jun 15 11:02:05.137: INFO: created pod pod-service-account-defaultsa-mountspec Jun 15 11:02:05.137: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jun 15 11:02:05.181: INFO: created pod pod-service-account-mountsa-mountspec Jun 15 11:02:05.181: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jun 15 11:02:05.193: INFO: created pod pod-service-account-nomountsa-mountspec Jun 15 11:02:05.193: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jun 15 11:02:05.222: INFO: created pod pod-service-account-defaultsa-nomountspec Jun 15 11:02:05.222: 
INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jun 15 11:02:05.264: INFO: created pod pod-service-account-mountsa-nomountspec Jun 15 11:02:05.264: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jun 15 11:02:05.328: INFO: created pod pod-service-account-nomountsa-nomountspec Jun 15 11:02:05.328: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:02:05.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-svm5k" for this suite. Jun 15 11:02:35.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:02:35.483: INFO: namespace: e2e-tests-svcaccounts-svm5k, resource: bindings, ignored listing per whitelist Jun 15 11:02:35.530: INFO: namespace e2e-tests-svcaccounts-svm5k deletion completed in 30.138797953s • [SLOW TEST:31.087 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:02:35.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jun 15 11:02:42.713: INFO: 4 pods remaining Jun 15 11:02:42.713: INFO: 0 pods has nil DeletionTimestamp Jun 15 11:02:42.713: INFO: Jun 15 11:02:43.415: INFO: 0 pods remaining Jun 15 11:02:43.415: INFO: 0 pods has nil DeletionTimestamp Jun 15 11:02:43.415: INFO: STEP: Gathering metrics W0615 11:02:44.760675 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 15 11:02:44.760: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:02:44.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-hm7s6" for this suite. Jun 15 11:02:51.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:02:51.655: INFO: namespace: e2e-tests-gc-hm7s6, resource: bindings, ignored listing per whitelist Jun 15 11:02:51.667: INFO: namespace e2e-tests-gc-hm7s6 deletion completed in 6.631043808s • [SLOW TEST:16.136 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:02:51.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 15 11:02:51.937: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"c1b3441f-aef7-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001eeb44a), BlockOwnerDeletion:(*bool)(0xc001eeb44b)}} Jun 15 11:02:51.960: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c1b1e708-aef7-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001e313a2), BlockOwnerDeletion:(*bool)(0xc001e313a3)}} Jun 15 11:02:51.972: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c1b27302-aef7-11ea-99e8-0242ac110002", 
Controller:(*bool)(0xc001ac84c2), BlockOwnerDeletion:(*bool)(0xc001ac84c3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:02:57.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-nxcdh" for this suite. Jun 15 11:03:03.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:03:03.096: INFO: namespace: e2e-tests-gc-nxcdh, resource: bindings, ignored listing per whitelist Jun 15 11:03:03.139: INFO: namespace e2e-tests-gc-nxcdh deletion completed in 6.097656363s • [SLOW TEST:11.473 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:03:03.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:03:07.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-45qhb" for this suite. 
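
The wrapper-volumes case above puts a Secret volume and a ConfigMap volume, both of which are implemented on top of emptyDir "wrapper" volumes, into a single pod and checks that they do not conflict. A sketch of that pod shape follows; object names, image, and mount paths are placeholders.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapper-volumes-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{
					Name: "secret-volume",
					VolumeSource: corev1.VolumeSource{
						Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-secret"},
					},
				},
				{
					Name: "configmap-volume",
					VolumeSource: corev1.VolumeSource{
						ConfigMap: &corev1.ConfigMapVolumeSource{
							LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-configmap"},
						},
					},
				},
			},
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true},
					{Name: "configmap-volume", MountPath: "/etc/configmap-volume", ReadOnly: true},
				},
			}},
		},
	}
	fmt.Printf("volumes defined: %d\n", len(pod.Spec.Volumes))
}
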
Jun 15 11:03:13.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:03:13.356: INFO: namespace: e2e-tests-emptydir-wrapper-45qhb, resource: bindings, ignored listing per whitelist Jun 15 11:03:13.421: INFO: namespace e2e-tests-emptydir-wrapper-45qhb deletion completed in 6.095682292s • [SLOW TEST:10.282 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:03:13.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-w7zvz STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 15 11:03:13.526: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 15 11:03:41.613: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.243:8080/dial?request=hostName&protocol=http&host=10.244.2.200&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-w7zvz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 15 11:03:41.613: INFO: >>> kubeConfig: /root/.kube/config I0615 11:03:41.640138 6 log.go:172] (0xc001518160) (0xc000d4a320) Create stream I0615 11:03:41.640173 6 log.go:172] (0xc001518160) (0xc000d4a320) Stream added, broadcasting: 1 I0615 11:03:41.643640 6 log.go:172] (0xc001518160) Reply frame received for 1 I0615 11:03:41.643689 6 log.go:172] (0xc001518160) (0xc001e5c1e0) Create stream I0615 11:03:41.643706 6 log.go:172] (0xc001518160) (0xc001e5c1e0) Stream added, broadcasting: 3 I0615 11:03:41.644781 6 log.go:172] (0xc001518160) Reply frame received for 3 I0615 11:03:41.644816 6 log.go:172] (0xc001518160) (0xc001e74be0) Create stream I0615 11:03:41.644828 6 log.go:172] (0xc001518160) (0xc001e74be0) Stream added, broadcasting: 5 I0615 11:03:41.646013 6 log.go:172] (0xc001518160) Reply frame received for 5 I0615 11:03:41.818324 6 log.go:172] (0xc001518160) Data frame received for 3 I0615 11:03:41.818358 6 log.go:172] (0xc001e5c1e0) (3) Data frame handling I0615 11:03:41.818378 6 log.go:172] (0xc001e5c1e0) (3) Data frame sent I0615 11:03:41.818961 6 log.go:172] (0xc001518160) Data frame received for 3 I0615 11:03:41.818990 6 log.go:172] (0xc001e5c1e0) (3) Data frame handling I0615 11:03:41.819320 6 log.go:172] (0xc001518160) Data frame received for 5 I0615 11:03:41.819338 6 log.go:172] (0xc001e74be0) (5) Data frame handling I0615 11:03:41.820782 6 log.go:172] (0xc001518160) Data frame received for 1 I0615 
11:03:41.820823 6 log.go:172] (0xc000d4a320) (1) Data frame handling I0615 11:03:41.820856 6 log.go:172] (0xc000d4a320) (1) Data frame sent I0615 11:03:41.820884 6 log.go:172] (0xc001518160) (0xc000d4a320) Stream removed, broadcasting: 1 I0615 11:03:41.820930 6 log.go:172] (0xc001518160) Go away received I0615 11:03:41.821435 6 log.go:172] (0xc001518160) (0xc000d4a320) Stream removed, broadcasting: 1 I0615 11:03:41.821470 6 log.go:172] (0xc001518160) (0xc001e5c1e0) Stream removed, broadcasting: 3 I0615 11:03:41.821492 6 log.go:172] (0xc001518160) (0xc001e74be0) Stream removed, broadcasting: 5 Jun 15 11:03:41.821: INFO: Waiting for endpoints: map[] Jun 15 11:03:41.825: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.243:8080/dial?request=hostName&protocol=http&host=10.244.1.242&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-w7zvz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 15 11:03:41.825: INFO: >>> kubeConfig: /root/.kube/config I0615 11:03:41.858746 6 log.go:172] (0xc001b4e2c0) (0xc001e5c3c0) Create stream I0615 11:03:41.858773 6 log.go:172] (0xc001b4e2c0) (0xc001e5c3c0) Stream added, broadcasting: 1 I0615 11:03:41.860891 6 log.go:172] (0xc001b4e2c0) Reply frame received for 1 I0615 11:03:41.860935 6 log.go:172] (0xc001b4e2c0) (0xc001e74d20) Create stream I0615 11:03:41.860963 6 log.go:172] (0xc001b4e2c0) (0xc001e74d20) Stream added, broadcasting: 3 I0615 11:03:41.861996 6 log.go:172] (0xc001b4e2c0) Reply frame received for 3 I0615 11:03:41.862034 6 log.go:172] (0xc001b4e2c0) (0xc000c4b360) Create stream I0615 11:03:41.862048 6 log.go:172] (0xc001b4e2c0) (0xc000c4b360) Stream added, broadcasting: 5 I0615 11:03:41.862915 6 log.go:172] (0xc001b4e2c0) Reply frame received for 5 I0615 11:03:41.935159 6 log.go:172] (0xc001b4e2c0) Data frame received for 3 I0615 11:03:41.935203 6 log.go:172] (0xc001e74d20) (3) Data frame handling I0615 11:03:41.935236 6 log.go:172] (0xc001e74d20) (3) Data frame sent I0615 11:03:41.935818 6 log.go:172] (0xc001b4e2c0) Data frame received for 3 I0615 11:03:41.935850 6 log.go:172] (0xc001e74d20) (3) Data frame handling I0615 11:03:41.935921 6 log.go:172] (0xc001b4e2c0) Data frame received for 5 I0615 11:03:41.935940 6 log.go:172] (0xc000c4b360) (5) Data frame handling I0615 11:03:41.937657 6 log.go:172] (0xc001b4e2c0) Data frame received for 1 I0615 11:03:41.937676 6 log.go:172] (0xc001e5c3c0) (1) Data frame handling I0615 11:03:41.937692 6 log.go:172] (0xc001e5c3c0) (1) Data frame sent I0615 11:03:41.937707 6 log.go:172] (0xc001b4e2c0) (0xc001e5c3c0) Stream removed, broadcasting: 1 I0615 11:03:41.937725 6 log.go:172] (0xc001b4e2c0) Go away received I0615 11:03:41.937838 6 log.go:172] (0xc001b4e2c0) (0xc001e5c3c0) Stream removed, broadcasting: 1 I0615 11:03:41.937853 6 log.go:172] (0xc001b4e2c0) (0xc001e74d20) Stream removed, broadcasting: 3 I0615 11:03:41.937859 6 log.go:172] (0xc001b4e2c0) (0xc000c4b360) Stream removed, broadcasting: 5 Jun 15 11:03:41.937: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:03:41.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-w7zvz" for this suite. 
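
The networking case above checks intra-pod HTTP connectivity by asking a test pod's /dial endpoint to probe another pod's IP on port 8080. Here is a sketch of the same request issued directly with net/http; it only works from inside the cluster network, and the IPs are the ones from this run's log, which differ on every run.

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// The test's webserver pod exposes a /dial endpoint that performs the probe;
	// these addresses are placeholders taken from the log above.
	dialer := "10.244.1.243:8080" // pod issuing the probe
	target := "10.244.2.200"      // pod on the other node being probed

	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", "http")
	q.Set("host", target)
	q.Set("port", "8080")
	q.Set("tries", "1")

	resp, err := http.Get(fmt.Sprintf("http://%s/dial?%s", dialer, q.Encode()))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Expect the responding pod's hostname in the reply, one entry per successful try.
	fmt.Println(string(body))
}
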
Jun 15 11:04:05.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:04:05.972: INFO: namespace: e2e-tests-pod-network-test-w7zvz, resource: bindings, ignored listing per whitelist Jun 15 11:04:06.027: INFO: namespace e2e-tests-pod-network-test-w7zvz deletion completed in 24.085913433s • [SLOW TEST:52.605 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:04:06.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 15 11:04:10.705: INFO: Successfully updated pod "annotationupdateee02e0fa-aef7-11ea-99db-0242ac11001b" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:04:12.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9dz2f" for this suite. 
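
The downward-API volume case above projects metadata.annotations into a file and expects the kubelet to rewrite that file after the pod's annotations are updated. A sketch of such a pod follows; the annotation key/value, image, and paths are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-example",
			Annotations: map[string]string{"builder": "bar"}, // later patched to trigger the refresh
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].DownwardAPI.Items[0].FieldRef.FieldPath)
}
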
Jun 15 11:04:34.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:04:34.805: INFO: namespace: e2e-tests-downward-api-9dz2f, resource: bindings, ignored listing per whitelist Jun 15 11:04:34.819: INFO: namespace e2e-tests-downward-api-9dz2f deletion completed in 22.094306994s • [SLOW TEST:28.792 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:04:34.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:05:34.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-s6bks" for this suite. 
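
The probing case above gives a container a readiness probe that always fails, so the pod keeps Running for the whole observation window without ever becoming Ready and, having no liveness probe, without ever restarting. A sketch of that shape of pod follows; names, image, and probe timings are placeholders.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A readiness probe whose command always exits non-zero: the container is
	// never marked Ready, but readiness failures alone never cause a restart.
	readiness := &corev1.Probe{
		InitialDelaySeconds: 5,
		PeriodSeconds:       5,
	}
	readiness.Exec = &corev1.ExecAction{Command: []string{"/bin/false"}}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-never-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:           "test-webserver",
				Image:          "nginx", // illustrative image
				ReadinessProbe: readiness,
			}},
		},
	}
	fmt.Printf("readiness command: %v\n", pod.Spec.Containers[0].ReadinessProbe.Exec.Command)
}
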
Jun 15 11:05:57.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:05:57.383: INFO: namespace: e2e-tests-container-probe-s6bks, resource: bindings, ignored listing per whitelist Jun 15 11:05:57.436: INFO: namespace e2e-tests-container-probe-s6bks deletion completed in 22.466440583s • [SLOW TEST:82.616 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:05:57.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Jun 15 11:05:57.581: INFO: Waiting up to 5m0s for pod "var-expansion-30680e16-aef8-11ea-99db-0242ac11001b" in namespace "e2e-tests-var-expansion-s4464" to be "success or failure" Jun 15 11:05:57.584: INFO: Pod "var-expansion-30680e16-aef8-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.589631ms Jun 15 11:05:59.588: INFO: Pod "var-expansion-30680e16-aef8-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006916543s Jun 15 11:06:01.593: INFO: Pod "var-expansion-30680e16-aef8-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011774271s STEP: Saw pod success Jun 15 11:06:01.593: INFO: Pod "var-expansion-30680e16-aef8-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:06:01.596: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-30680e16-aef8-11ea-99db-0242ac11001b container dapi-container: STEP: delete the pod Jun 15 11:06:01.624: INFO: Waiting for pod var-expansion-30680e16-aef8-11ea-99db-0242ac11001b to disappear Jun 15 11:06:01.689: INFO: Pod var-expansion-30680e16-aef8-11ea-99db-0242ac11001b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:06:01.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-s4464" for this suite. 
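
The variable-expansion case above defines plain env vars and then composes them into a new one with $(NAME) references, which the kubelet expands before the container starts. A sketch of that container spec follows; the variable names and the ";;" separator are illustrative, in the style of the test's output.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
					// $(NAME) references are resolved from variables defined
					// earlier in this list, composing a new value.
					{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Env[2].Value)
}
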
Jun 15 11:06:07.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:06:07.758: INFO: namespace: e2e-tests-var-expansion-s4464, resource: bindings, ignored listing per whitelist Jun 15 11:06:07.809: INFO: namespace e2e-tests-var-expansion-s4464 deletion completed in 6.116074611s • [SLOW TEST:10.373 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:06:07.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 15 11:06:07.923: INFO: Waiting up to 5m0s for pod "downward-api-369681dd-aef8-11ea-99db-0242ac11001b" in namespace "e2e-tests-downward-api-7wcnm" to be "success or failure" Jun 15 11:06:07.942: INFO: Pod "downward-api-369681dd-aef8-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.734699ms Jun 15 11:06:09.946: INFO: Pod "downward-api-369681dd-aef8-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02273882s Jun 15 11:06:11.950: INFO: Pod "downward-api-369681dd-aef8-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026773841s STEP: Saw pod success Jun 15 11:06:11.950: INFO: Pod "downward-api-369681dd-aef8-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:06:11.953: INFO: Trying to get logs from node hunter-worker2 pod downward-api-369681dd-aef8-11ea-99db-0242ac11001b container dapi-container: STEP: delete the pod Jun 15 11:06:12.024: INFO: Waiting for pod downward-api-369681dd-aef8-11ea-99db-0242ac11001b to disappear Jun 15 11:06:12.061: INFO: Pod downward-api-369681dd-aef8-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:06:12.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7wcnm" for this suite. 
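
The downward-API case above injects the pod's name, namespace, and IP as environment variables via fieldRef. A sketch of that container spec follows; the env var names and image are placeholders.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
			},
		}
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					fieldEnv("POD_NAME", "metadata.name"),
					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("POD_IP", "status.podIP"),
				},
			}},
		},
	}
	fmt.Printf("%d downward API env vars\n", len(pod.Spec.Containers[0].Env))
}
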
Jun 15 11:06:18.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:06:18.145: INFO: namespace: e2e-tests-downward-api-7wcnm, resource: bindings, ignored listing per whitelist Jun 15 11:06:18.162: INFO: namespace e2e-tests-downward-api-7wcnm deletion completed in 6.097590653s • [SLOW TEST:10.353 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:06:18.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-3cbc2cf2-aef8-11ea-99db-0242ac11001b STEP: Creating a pod to test consume secrets Jun 15 11:06:18.267: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3cbea08a-aef8-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-8zltq" to be "success or failure" Jun 15 11:06:18.270: INFO: Pod "pod-projected-secrets-3cbea08a-aef8-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.087895ms Jun 15 11:06:20.274: INFO: Pod "pod-projected-secrets-3cbea08a-aef8-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007408339s Jun 15 11:06:22.278: INFO: Pod "pod-projected-secrets-3cbea08a-aef8-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010877293s STEP: Saw pod success Jun 15 11:06:22.278: INFO: Pod "pod-projected-secrets-3cbea08a-aef8-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:06:22.280: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-3cbea08a-aef8-11ea-99db-0242ac11001b container projected-secret-volume-test: STEP: delete the pod Jun 15 11:06:22.399: INFO: Waiting for pod pod-projected-secrets-3cbea08a-aef8-11ea-99db-0242ac11001b to disappear Jun 15 11:06:22.519: INFO: Pod pod-projected-secrets-3cbea08a-aef8-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:06:22.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8zltq" for this suite. 
Jun 15 11:06:28.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:06:28.643: INFO: namespace: e2e-tests-projected-8zltq, resource: bindings, ignored listing per whitelist Jun 15 11:06:28.651: INFO: namespace e2e-tests-projected-8zltq deletion completed in 6.128655798s • [SLOW TEST:10.489 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:06:28.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-vbbql [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-vbbql STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-vbbql Jun 15 11:06:28.822: INFO: Found 0 stateful pods, waiting for 1 Jun 15 11:06:38.826: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jun 15 11:06:38.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vbbql ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 15 11:06:39.110: INFO: stderr: "I0615 11:06:38.956122 346 log.go:172] (0xc000138840) (0xc00065b400) Create stream\nI0615 11:06:38.956189 346 log.go:172] (0xc000138840) (0xc00065b400) Stream added, broadcasting: 1\nI0615 11:06:38.959472 346 log.go:172] (0xc000138840) Reply frame received for 1\nI0615 11:06:38.959506 346 log.go:172] (0xc000138840) (0xc00065b4a0) Create stream\nI0615 11:06:38.959516 346 log.go:172] (0xc000138840) (0xc00065b4a0) Stream added, broadcasting: 3\nI0615 11:06:38.960436 346 log.go:172] (0xc000138840) Reply frame received for 3\nI0615 11:06:38.960475 346 log.go:172] (0xc000138840) (0xc0007be000) Create stream\nI0615 11:06:38.960486 346 log.go:172] (0xc000138840) (0xc0007be000) Stream added, broadcasting: 5\nI0615 11:06:38.961702 346 log.go:172] (0xc000138840) Reply frame received for 5\nI0615 11:06:39.102349 346 log.go:172] (0xc000138840) Data frame received for 3\nI0615 
11:06:39.102388 346 log.go:172] (0xc00065b4a0) (3) Data frame handling\nI0615 11:06:39.102410 346 log.go:172] (0xc00065b4a0) (3) Data frame sent\nI0615 11:06:39.102934 346 log.go:172] (0xc000138840) Data frame received for 3\nI0615 11:06:39.102964 346 log.go:172] (0xc00065b4a0) (3) Data frame handling\nI0615 11:06:39.102996 346 log.go:172] (0xc000138840) Data frame received for 5\nI0615 11:06:39.103014 346 log.go:172] (0xc0007be000) (5) Data frame handling\nI0615 11:06:39.104595 346 log.go:172] (0xc000138840) Data frame received for 1\nI0615 11:06:39.104615 346 log.go:172] (0xc00065b400) (1) Data frame handling\nI0615 11:06:39.104638 346 log.go:172] (0xc00065b400) (1) Data frame sent\nI0615 11:06:39.104669 346 log.go:172] (0xc000138840) (0xc00065b400) Stream removed, broadcasting: 1\nI0615 11:06:39.104786 346 log.go:172] (0xc000138840) Go away received\nI0615 11:06:39.104944 346 log.go:172] (0xc000138840) (0xc00065b400) Stream removed, broadcasting: 1\nI0615 11:06:39.104980 346 log.go:172] (0xc000138840) (0xc00065b4a0) Stream removed, broadcasting: 3\nI0615 11:06:39.105001 346 log.go:172] (0xc000138840) (0xc0007be000) Stream removed, broadcasting: 5\n" Jun 15 11:06:39.110: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 15 11:06:39.110: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 15 11:06:39.121: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 15 11:06:49.124: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 15 11:06:49.124: INFO: Waiting for statefulset status.replicas updated to 0 Jun 15 11:06:49.153: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999974s Jun 15 11:06:50.160: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.980645878s Jun 15 11:06:51.164: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.973556707s Jun 15 11:06:52.337: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.969941684s Jun 15 11:06:53.342: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.796310957s Jun 15 11:06:54.346: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.791440858s Jun 15 11:06:55.350: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.787372542s Jun 15 11:06:56.355: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.783189407s Jun 15 11:06:57.359: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.778988765s Jun 15 11:06:58.364: INFO: Verifying statefulset ss doesn't scale past 1 for another 774.671957ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-vbbql Jun 15 11:06:59.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vbbql ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 15 11:06:59.608: INFO: stderr: "I0615 11:06:59.497642 369 log.go:172] (0xc00016a840) (0xc00066f4a0) Create stream\nI0615 11:06:59.497732 369 log.go:172] (0xc00016a840) (0xc00066f4a0) Stream added, broadcasting: 1\nI0615 11:06:59.501039 369 log.go:172] (0xc00016a840) Reply frame received for 1\nI0615 11:06:59.501088 369 log.go:172] (0xc00016a840) (0xc000332000) Create stream\nI0615 11:06:59.501475 369 log.go:172] (0xc00016a840) (0xc000332000) Stream added, broadcasting: 3\nI0615 11:06:59.502448 369 
log.go:172] (0xc00016a840) Reply frame received for 3\nI0615 11:06:59.502484 369 log.go:172] (0xc00016a840) (0xc0003320a0) Create stream\nI0615 11:06:59.502495 369 log.go:172] (0xc00016a840) (0xc0003320a0) Stream added, broadcasting: 5\nI0615 11:06:59.503450 369 log.go:172] (0xc00016a840) Reply frame received for 5\nI0615 11:06:59.601666 369 log.go:172] (0xc00016a840) Data frame received for 3\nI0615 11:06:59.601719 369 log.go:172] (0xc000332000) (3) Data frame handling\nI0615 11:06:59.601732 369 log.go:172] (0xc000332000) (3) Data frame sent\nI0615 11:06:59.601752 369 log.go:172] (0xc00016a840) Data frame received for 3\nI0615 11:06:59.601769 369 log.go:172] (0xc000332000) (3) Data frame handling\nI0615 11:06:59.601826 369 log.go:172] (0xc00016a840) Data frame received for 5\nI0615 11:06:59.601863 369 log.go:172] (0xc0003320a0) (5) Data frame handling\nI0615 11:06:59.603505 369 log.go:172] (0xc00016a840) Data frame received for 1\nI0615 11:06:59.603550 369 log.go:172] (0xc00066f4a0) (1) Data frame handling\nI0615 11:06:59.603594 369 log.go:172] (0xc00066f4a0) (1) Data frame sent\nI0615 11:06:59.603635 369 log.go:172] (0xc00016a840) (0xc00066f4a0) Stream removed, broadcasting: 1\nI0615 11:06:59.603695 369 log.go:172] (0xc00016a840) Go away received\nI0615 11:06:59.603873 369 log.go:172] (0xc00016a840) (0xc00066f4a0) Stream removed, broadcasting: 1\nI0615 11:06:59.603900 369 log.go:172] (0xc00016a840) (0xc000332000) Stream removed, broadcasting: 3\nI0615 11:06:59.603908 369 log.go:172] (0xc00016a840) (0xc0003320a0) Stream removed, broadcasting: 5\n" Jun 15 11:06:59.608: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 15 11:06:59.608: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 15 11:06:59.612: INFO: Found 1 stateful pods, waiting for 3 Jun 15 11:07:09.617: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 15 11:07:09.617: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 15 11:07:09.617: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jun 15 11:07:09.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vbbql ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 15 11:07:09.847: INFO: stderr: "I0615 11:07:09.750141 392 log.go:172] (0xc0007144d0) (0xc00061e640) Create stream\nI0615 11:07:09.750211 392 log.go:172] (0xc0007144d0) (0xc00061e640) Stream added, broadcasting: 1\nI0615 11:07:09.753462 392 log.go:172] (0xc0007144d0) Reply frame received for 1\nI0615 11:07:09.753507 392 log.go:172] (0xc0007144d0) (0xc0003cad20) Create stream\nI0615 11:07:09.753535 392 log.go:172] (0xc0007144d0) (0xc0003cad20) Stream added, broadcasting: 3\nI0615 11:07:09.754716 392 log.go:172] (0xc0007144d0) Reply frame received for 3\nI0615 11:07:09.754773 392 log.go:172] (0xc0007144d0) (0xc000412000) Create stream\nI0615 11:07:09.754790 392 log.go:172] (0xc0007144d0) (0xc000412000) Stream added, broadcasting: 5\nI0615 11:07:09.756016 392 log.go:172] (0xc0007144d0) Reply frame received for 5\nI0615 11:07:09.839984 392 log.go:172] (0xc0007144d0) Data frame received for 5\nI0615 11:07:09.840028 392 log.go:172] (0xc000412000) (5) Data frame handling\nI0615 11:07:09.840085 392 
log.go:172] (0xc0007144d0) Data frame received for 3\nI0615 11:07:09.840149 392 log.go:172] (0xc0003cad20) (3) Data frame handling\nI0615 11:07:09.840184 392 log.go:172] (0xc0003cad20) (3) Data frame sent\nI0615 11:07:09.840210 392 log.go:172] (0xc0007144d0) Data frame received for 3\nI0615 11:07:09.840223 392 log.go:172] (0xc0003cad20) (3) Data frame handling\nI0615 11:07:09.841914 392 log.go:172] (0xc0007144d0) Data frame received for 1\nI0615 11:07:09.841936 392 log.go:172] (0xc00061e640) (1) Data frame handling\nI0615 11:07:09.841946 392 log.go:172] (0xc00061e640) (1) Data frame sent\nI0615 11:07:09.841969 392 log.go:172] (0xc0007144d0) (0xc00061e640) Stream removed, broadcasting: 1\nI0615 11:07:09.842056 392 log.go:172] (0xc0007144d0) Go away received\nI0615 11:07:09.842151 392 log.go:172] (0xc0007144d0) (0xc00061e640) Stream removed, broadcasting: 1\nI0615 11:07:09.842167 392 log.go:172] (0xc0007144d0) (0xc0003cad20) Stream removed, broadcasting: 3\nI0615 11:07:09.842175 392 log.go:172] (0xc0007144d0) (0xc000412000) Stream removed, broadcasting: 5\n" Jun 15 11:07:09.848: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 15 11:07:09.848: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 15 11:07:09.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vbbql ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 15 11:07:10.120: INFO: stderr: "I0615 11:07:09.972174 415 log.go:172] (0xc00082e2c0) (0xc000720640) Create stream\nI0615 11:07:09.972234 415 log.go:172] (0xc00082e2c0) (0xc000720640) Stream added, broadcasting: 1\nI0615 11:07:09.975233 415 log.go:172] (0xc00082e2c0) Reply frame received for 1\nI0615 11:07:09.975296 415 log.go:172] (0xc00082e2c0) (0xc0005c4c80) Create stream\nI0615 11:07:09.975316 415 log.go:172] (0xc00082e2c0) (0xc0005c4c80) Stream added, broadcasting: 3\nI0615 11:07:09.976413 415 log.go:172] (0xc00082e2c0) Reply frame received for 3\nI0615 11:07:09.976439 415 log.go:172] (0xc00082e2c0) (0xc0007206e0) Create stream\nI0615 11:07:09.976448 415 log.go:172] (0xc00082e2c0) (0xc0007206e0) Stream added, broadcasting: 5\nI0615 11:07:09.977996 415 log.go:172] (0xc00082e2c0) Reply frame received for 5\nI0615 11:07:10.110419 415 log.go:172] (0xc00082e2c0) Data frame received for 3\nI0615 11:07:10.110459 415 log.go:172] (0xc0005c4c80) (3) Data frame handling\nI0615 11:07:10.110482 415 log.go:172] (0xc0005c4c80) (3) Data frame sent\nI0615 11:07:10.110499 415 log.go:172] (0xc00082e2c0) Data frame received for 3\nI0615 11:07:10.110522 415 log.go:172] (0xc0005c4c80) (3) Data frame handling\nI0615 11:07:10.110552 415 log.go:172] (0xc00082e2c0) Data frame received for 5\nI0615 11:07:10.110579 415 log.go:172] (0xc0007206e0) (5) Data frame handling\nI0615 11:07:10.112840 415 log.go:172] (0xc00082e2c0) Data frame received for 1\nI0615 11:07:10.112868 415 log.go:172] (0xc000720640) (1) Data frame handling\nI0615 11:07:10.112895 415 log.go:172] (0xc000720640) (1) Data frame sent\nI0615 11:07:10.112915 415 log.go:172] (0xc00082e2c0) (0xc000720640) Stream removed, broadcasting: 1\nI0615 11:07:10.112950 415 log.go:172] (0xc00082e2c0) Go away received\nI0615 11:07:10.113229 415 log.go:172] (0xc00082e2c0) (0xc000720640) Stream removed, broadcasting: 1\nI0615 11:07:10.113251 415 log.go:172] (0xc00082e2c0) (0xc0005c4c80) Stream removed, broadcasting: 3\nI0615 11:07:10.113257 415 log.go:172] 
(0xc00082e2c0) (0xc0007206e0) Stream removed, broadcasting: 5\n" Jun 15 11:07:10.120: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 15 11:07:10.120: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 15 11:07:10.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vbbql ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 15 11:07:10.364: INFO: stderr: "I0615 11:07:10.251105 436 log.go:172] (0xc000724370) (0xc000764640) Create stream\nI0615 11:07:10.251160 436 log.go:172] (0xc000724370) (0xc000764640) Stream added, broadcasting: 1\nI0615 11:07:10.253389 436 log.go:172] (0xc000724370) Reply frame received for 1\nI0615 11:07:10.253429 436 log.go:172] (0xc000724370) (0xc0007646e0) Create stream\nI0615 11:07:10.253439 436 log.go:172] (0xc000724370) (0xc0007646e0) Stream added, broadcasting: 3\nI0615 11:07:10.254173 436 log.go:172] (0xc000724370) Reply frame received for 3\nI0615 11:07:10.254207 436 log.go:172] (0xc000724370) (0xc0005dec80) Create stream\nI0615 11:07:10.254219 436 log.go:172] (0xc000724370) (0xc0005dec80) Stream added, broadcasting: 5\nI0615 11:07:10.254968 436 log.go:172] (0xc000724370) Reply frame received for 5\nI0615 11:07:10.356282 436 log.go:172] (0xc000724370) Data frame received for 3\nI0615 11:07:10.356325 436 log.go:172] (0xc0007646e0) (3) Data frame handling\nI0615 11:07:10.356528 436 log.go:172] (0xc0007646e0) (3) Data frame sent\nI0615 11:07:10.356607 436 log.go:172] (0xc000724370) Data frame received for 3\nI0615 11:07:10.356630 436 log.go:172] (0xc0007646e0) (3) Data frame handling\nI0615 11:07:10.356667 436 log.go:172] (0xc000724370) Data frame received for 5\nI0615 11:07:10.356720 436 log.go:172] (0xc0005dec80) (5) Data frame handling\nI0615 11:07:10.359062 436 log.go:172] (0xc000724370) Data frame received for 1\nI0615 11:07:10.359080 436 log.go:172] (0xc000764640) (1) Data frame handling\nI0615 11:07:10.359094 436 log.go:172] (0xc000764640) (1) Data frame sent\nI0615 11:07:10.359108 436 log.go:172] (0xc000724370) (0xc000764640) Stream removed, broadcasting: 1\nI0615 11:07:10.359225 436 log.go:172] (0xc000724370) Go away received\nI0615 11:07:10.359322 436 log.go:172] (0xc000724370) (0xc000764640) Stream removed, broadcasting: 1\nI0615 11:07:10.359350 436 log.go:172] (0xc000724370) (0xc0007646e0) Stream removed, broadcasting: 3\nI0615 11:07:10.359369 436 log.go:172] (0xc000724370) (0xc0005dec80) Stream removed, broadcasting: 5\n" Jun 15 11:07:10.365: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 15 11:07:10.365: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 15 11:07:10.365: INFO: Waiting for statefulset status.replicas updated to 0 Jun 15 11:07:10.398: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 15 11:07:20.406: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 15 11:07:20.406: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 15 11:07:20.406: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 15 11:07:20.422: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999574s Jun 15 11:07:21.428: INFO: Verifying statefulset ss doesn't scale past 3 for another 
8.989444649s Jun 15 11:07:22.433: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.983788493s Jun 15 11:07:23.438: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.979047266s Jun 15 11:07:24.443: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.973551509s Jun 15 11:07:25.448: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.96840936s Jun 15 11:07:26.454: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.963269308s Jun 15 11:07:27.458: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.958019781s Jun 15 11:07:28.463: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.953710036s Jun 15 11:07:29.469: INFO: Verifying statefulset ss doesn't scale past 3 for another 948.181011ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-vbbql Jun 15 11:07:30.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vbbql ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 15 11:07:30.729: INFO: stderr: "I0615 11:07:30.641291 459 log.go:172] (0xc0008442c0) (0xc000732640) Create stream\nI0615 11:07:30.641353 459 log.go:172] (0xc0008442c0) (0xc000732640) Stream added, broadcasting: 1\nI0615 11:07:30.644079 459 log.go:172] (0xc0008442c0) Reply frame received for 1\nI0615 11:07:30.644144 459 log.go:172] (0xc0008442c0) (0xc000120e60) Create stream\nI0615 11:07:30.644160 459 log.go:172] (0xc0008442c0) (0xc000120e60) Stream added, broadcasting: 3\nI0615 11:07:30.645473 459 log.go:172] (0xc0008442c0) Reply frame received for 3\nI0615 11:07:30.645519 459 log.go:172] (0xc0008442c0) (0xc000312000) Create stream\nI0615 11:07:30.645540 459 log.go:172] (0xc0008442c0) (0xc000312000) Stream added, broadcasting: 5\nI0615 11:07:30.646471 459 log.go:172] (0xc0008442c0) Reply frame received for 5\nI0615 11:07:30.719616 459 log.go:172] (0xc0008442c0) Data frame received for 5\nI0615 11:07:30.719664 459 log.go:172] (0xc000312000) (5) Data frame handling\nI0615 11:07:30.719695 459 log.go:172] (0xc0008442c0) Data frame received for 3\nI0615 11:07:30.719725 459 log.go:172] (0xc000120e60) (3) Data frame handling\nI0615 11:07:30.719753 459 log.go:172] (0xc000120e60) (3) Data frame sent\nI0615 11:07:30.719771 459 log.go:172] (0xc0008442c0) Data frame received for 3\nI0615 11:07:30.719793 459 log.go:172] (0xc000120e60) (3) Data frame handling\nI0615 11:07:30.721833 459 log.go:172] (0xc0008442c0) Data frame received for 1\nI0615 11:07:30.721866 459 log.go:172] (0xc000732640) (1) Data frame handling\nI0615 11:07:30.721896 459 log.go:172] (0xc000732640) (1) Data frame sent\nI0615 11:07:30.721918 459 log.go:172] (0xc0008442c0) (0xc000732640) Stream removed, broadcasting: 1\nI0615 11:07:30.722180 459 log.go:172] (0xc0008442c0) (0xc000732640) Stream removed, broadcasting: 1\nI0615 11:07:30.722207 459 log.go:172] (0xc0008442c0) (0xc000120e60) Stream removed, broadcasting: 3\nI0615 11:07:30.722750 459 log.go:172] (0xc0008442c0) (0xc000312000) Stream removed, broadcasting: 5\n" Jun 15 11:07:30.729: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 15 11:07:30.729: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 15 11:07:30.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vbbql ss-1 -- /bin/sh -c mv 
-v /tmp/index.html /usr/share/nginx/html/ || true' Jun 15 11:07:31.100: INFO: stderr: "I0615 11:07:31.030326 482 log.go:172] (0xc00013a840) (0xc000744640) Create stream\nI0615 11:07:31.030399 482 log.go:172] (0xc00013a840) (0xc000744640) Stream added, broadcasting: 1\nI0615 11:07:31.033335 482 log.go:172] (0xc00013a840) Reply frame received for 1\nI0615 11:07:31.033391 482 log.go:172] (0xc00013a840) (0xc0007446e0) Create stream\nI0615 11:07:31.033410 482 log.go:172] (0xc00013a840) (0xc0007446e0) Stream added, broadcasting: 3\nI0615 11:07:31.034661 482 log.go:172] (0xc00013a840) Reply frame received for 3\nI0615 11:07:31.034725 482 log.go:172] (0xc00013a840) (0xc0005fad20) Create stream\nI0615 11:07:31.034749 482 log.go:172] (0xc00013a840) (0xc0005fad20) Stream added, broadcasting: 5\nI0615 11:07:31.035854 482 log.go:172] (0xc00013a840) Reply frame received for 5\nI0615 11:07:31.096324 482 log.go:172] (0xc00013a840) Data frame received for 5\nI0615 11:07:31.096352 482 log.go:172] (0xc0005fad20) (5) Data frame handling\nI0615 11:07:31.096372 482 log.go:172] (0xc00013a840) Data frame received for 3\nI0615 11:07:31.096378 482 log.go:172] (0xc0007446e0) (3) Data frame handling\nI0615 11:07:31.096386 482 log.go:172] (0xc0007446e0) (3) Data frame sent\nI0615 11:07:31.096392 482 log.go:172] (0xc00013a840) Data frame received for 3\nI0615 11:07:31.096397 482 log.go:172] (0xc0007446e0) (3) Data frame handling\nI0615 11:07:31.097397 482 log.go:172] (0xc00013a840) Data frame received for 1\nI0615 11:07:31.097415 482 log.go:172] (0xc000744640) (1) Data frame handling\nI0615 11:07:31.097434 482 log.go:172] (0xc000744640) (1) Data frame sent\nI0615 11:07:31.097451 482 log.go:172] (0xc00013a840) (0xc000744640) Stream removed, broadcasting: 1\nI0615 11:07:31.097481 482 log.go:172] (0xc00013a840) Go away received\nI0615 11:07:31.097668 482 log.go:172] (0xc00013a840) (0xc000744640) Stream removed, broadcasting: 1\nI0615 11:07:31.097687 482 log.go:172] (0xc00013a840) (0xc0007446e0) Stream removed, broadcasting: 3\nI0615 11:07:31.097692 482 log.go:172] (0xc00013a840) (0xc0005fad20) Stream removed, broadcasting: 5\n" Jun 15 11:07:31.101: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 15 11:07:31.101: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 15 11:07:31.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vbbql ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 15 11:07:31.304: INFO: stderr: "I0615 11:07:31.229586 503 log.go:172] (0xc0008282c0) (0xc000669360) Create stream\nI0615 11:07:31.229648 503 log.go:172] (0xc0008282c0) (0xc000669360) Stream added, broadcasting: 1\nI0615 11:07:31.232219 503 log.go:172] (0xc0008282c0) Reply frame received for 1\nI0615 11:07:31.232246 503 log.go:172] (0xc0008282c0) (0xc000522000) Create stream\nI0615 11:07:31.232253 503 log.go:172] (0xc0008282c0) (0xc000522000) Stream added, broadcasting: 3\nI0615 11:07:31.232877 503 log.go:172] (0xc0008282c0) Reply frame received for 3\nI0615 11:07:31.232902 503 log.go:172] (0xc0008282c0) (0xc000669400) Create stream\nI0615 11:07:31.232908 503 log.go:172] (0xc0008282c0) (0xc000669400) Stream added, broadcasting: 5\nI0615 11:07:31.233892 503 log.go:172] (0xc0008282c0) Reply frame received for 5\nI0615 11:07:31.295533 503 log.go:172] (0xc0008282c0) Data frame received for 3\nI0615 11:07:31.295554 503 log.go:172] (0xc000522000) (3) 
Data frame handling\nI0615 11:07:31.295562 503 log.go:172] (0xc000522000) (3) Data frame sent\nI0615 11:07:31.295568 503 log.go:172] (0xc0008282c0) Data frame received for 3\nI0615 11:07:31.295572 503 log.go:172] (0xc000522000) (3) Data frame handling\nI0615 11:07:31.295676 503 log.go:172] (0xc0008282c0) Data frame received for 5\nI0615 11:07:31.295735 503 log.go:172] (0xc000669400) (5) Data frame handling\nI0615 11:07:31.297002 503 log.go:172] (0xc0008282c0) Data frame received for 1\nI0615 11:07:31.297029 503 log.go:172] (0xc000669360) (1) Data frame handling\nI0615 11:07:31.297049 503 log.go:172] (0xc000669360) (1) Data frame sent\nI0615 11:07:31.297070 503 log.go:172] (0xc0008282c0) (0xc000669360) Stream removed, broadcasting: 1\nI0615 11:07:31.297101 503 log.go:172] (0xc0008282c0) Go away received\nI0615 11:07:31.297521 503 log.go:172] (0xc0008282c0) (0xc000669360) Stream removed, broadcasting: 1\nI0615 11:07:31.297559 503 log.go:172] (0xc0008282c0) (0xc000522000) Stream removed, broadcasting: 3\nI0615 11:07:31.297581 503 log.go:172] (0xc0008282c0) (0xc000669400) Stream removed, broadcasting: 5\n" Jun 15 11:07:31.304: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 15 11:07:31.304: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 15 11:07:31.305: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 15 11:08:01.410: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vbbql Jun 15 11:08:01.413: INFO: Scaling statefulset ss to 0 Jun 15 11:08:01.423: INFO: Waiting for statefulset status.replicas updated to 0 Jun 15 11:08:01.426: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:08:01.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-vbbql" for this suite. 
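A note on what the statefulset steps above are doing: the mv commands push each ss pod to Ready=false (their readiness check evidently depends on /usr/share/nginx/html/index.html), the requested scale-down to 0 then halts while the pods are unhealthy, and once the file is restored the pods are removed from the highest ordinal down. A minimal hand-run sketch of the same sequence with kubectl, assuming a statefulset named ss; the namespace statefulset-sketch below is a placeholder, not the one from this run:

# break the readiness check on every replica (moving index.html flips each pod to Ready=false, as in the log above)
for p in ss-0 ss-1 ss-2; do
  kubectl exec -n statefulset-sketch "$p" -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
done
# request scale-down to 0; while the pods are unhealthy the controller halts instead of deleting them
kubectl scale statefulset ss -n statefulset-sketch --replicas=0
# restore the readiness check; the scale-down then proceeds from the highest ordinal down (ss-2, ss-1, ss-0)
for p in ss-0 ss-1 ss-2; do
  kubectl exec -n statefulset-sketch "$p" -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
done
kubectl get pods -n statefulset-sketch -w   # watch the replicas disappear in reverse order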
Jun 15 11:08:07.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:08:07.505: INFO: namespace: e2e-tests-statefulset-vbbql, resource: bindings, ignored listing per whitelist Jun 15 11:08:07.570: INFO: namespace e2e-tests-statefulset-vbbql deletion completed in 6.087167457s • [SLOW TEST:98.919 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:08:07.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 15 11:08:15.747: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 15 11:08:15.782: INFO: Pod pod-with-poststart-exec-hook still exists Jun 15 11:08:17.782: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 15 11:08:17.848: INFO: Pod pod-with-poststart-exec-hook still exists Jun 15 11:08:19.782: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 15 11:08:19.865: INFO: Pod pod-with-poststart-exec-hook still exists Jun 15 11:08:21.782: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 15 11:08:21.786: INFO: Pod pod-with-poststart-exec-hook still exists Jun 15 11:08:23.782: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 15 11:08:23.786: INFO: Pod pod-with-poststart-exec-hook still exists Jun 15 11:08:25.782: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 15 11:08:25.786: INFO: Pod pod-with-poststart-exec-hook still exists Jun 15 11:08:27.782: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 15 11:08:27.787: INFO: Pod pod-with-poststart-exec-hook still exists Jun 15 11:08:29.782: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 15 11:08:29.786: INFO: Pod pod-with-poststart-exec-hook still exists Jun 15 11:08:31.782: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 15 11:08:31.786: INFO: Pod pod-with-poststart-exec-hook still exists Jun 15 11:08:33.782: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 
15 11:08:33.786: INFO: Pod pod-with-poststart-exec-hook still exists Jun 15 11:08:35.782: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 15 11:08:35.786: INFO: Pod pod-with-poststart-exec-hook still exists Jun 15 11:08:37.782: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 15 11:08:37.786: INFO: Pod pod-with-poststart-exec-hook still exists Jun 15 11:08:39.783: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 15 11:08:39.812: INFO: Pod pod-with-poststart-exec-hook still exists Jun 15 11:08:41.782: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 15 11:08:41.787: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:08:41.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-jhk72" for this suite. Jun 15 11:09:05.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:09:05.810: INFO: namespace: e2e-tests-container-lifecycle-hook-jhk72, resource: bindings, ignored listing per whitelist Jun 15 11:09:05.877: INFO: namespace e2e-tests-container-lifecycle-hook-jhk72 deletion completed in 24.086475949s • [SLOW TEST:58.306 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:09:05.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-lh2b9 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 15 11:09:06.056: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 15 11:09:36.944: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.207 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-lh2b9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 15 11:09:36.944: INFO: >>> kubeConfig: /root/.kube/config I0615 11:09:37.016829 6 log.go:172] (0xc000c99600) (0xc001addae0) Create stream I0615 11:09:37.016869 6 log.go:172] (0xc000c99600) (0xc001addae0) Stream added, broadcasting: 1 I0615 11:09:37.019139 6 log.go:172] 
(0xc000c99600) Reply frame received for 1 I0615 11:09:37.019173 6 log.go:172] (0xc000c99600) (0xc0008d0c80) Create stream I0615 11:09:37.019189 6 log.go:172] (0xc000c99600) (0xc0008d0c80) Stream added, broadcasting: 3 I0615 11:09:37.020380 6 log.go:172] (0xc000c99600) Reply frame received for 3 I0615 11:09:37.020419 6 log.go:172] (0xc000c99600) (0xc001d1f180) Create stream I0615 11:09:37.020434 6 log.go:172] (0xc000c99600) (0xc001d1f180) Stream added, broadcasting: 5 I0615 11:09:37.022007 6 log.go:172] (0xc000c99600) Reply frame received for 5 I0615 11:09:38.226783 6 log.go:172] (0xc000c99600) Data frame received for 3 I0615 11:09:38.226810 6 log.go:172] (0xc0008d0c80) (3) Data frame handling I0615 11:09:38.226826 6 log.go:172] (0xc0008d0c80) (3) Data frame sent I0615 11:09:38.227906 6 log.go:172] (0xc000c99600) Data frame received for 3 I0615 11:09:38.227937 6 log.go:172] (0xc0008d0c80) (3) Data frame handling I0615 11:09:38.228107 6 log.go:172] (0xc000c99600) Data frame received for 5 I0615 11:09:38.228126 6 log.go:172] (0xc001d1f180) (5) Data frame handling I0615 11:09:38.229427 6 log.go:172] (0xc000c99600) Data frame received for 1 I0615 11:09:38.229443 6 log.go:172] (0xc001addae0) (1) Data frame handling I0615 11:09:38.229459 6 log.go:172] (0xc001addae0) (1) Data frame sent I0615 11:09:38.229621 6 log.go:172] (0xc000c99600) (0xc001addae0) Stream removed, broadcasting: 1 I0615 11:09:38.229742 6 log.go:172] (0xc000c99600) Go away received I0615 11:09:38.229788 6 log.go:172] (0xc000c99600) (0xc001addae0) Stream removed, broadcasting: 1 I0615 11:09:38.229819 6 log.go:172] (0xc000c99600) (0xc0008d0c80) Stream removed, broadcasting: 3 I0615 11:09:38.229829 6 log.go:172] (0xc000c99600) (0xc001d1f180) Stream removed, broadcasting: 5 Jun 15 11:09:38.229: INFO: Found all expected endpoints: [netserver-0] Jun 15 11:09:38.232: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.248 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-lh2b9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 15 11:09:38.232: INFO: >>> kubeConfig: /root/.kube/config I0615 11:09:38.264634 6 log.go:172] (0xc001c202c0) (0xc0008d0f00) Create stream I0615 11:09:38.264670 6 log.go:172] (0xc001c202c0) (0xc0008d0f00) Stream added, broadcasting: 1 I0615 11:09:38.267013 6 log.go:172] (0xc001c202c0) Reply frame received for 1 I0615 11:09:38.267046 6 log.go:172] (0xc001c202c0) (0xc001addb80) Create stream I0615 11:09:38.267055 6 log.go:172] (0xc001c202c0) (0xc001addb80) Stream added, broadcasting: 3 I0615 11:09:38.267896 6 log.go:172] (0xc001c202c0) Reply frame received for 3 I0615 11:09:38.267941 6 log.go:172] (0xc001c202c0) (0xc001d1f2c0) Create stream I0615 11:09:38.267951 6 log.go:172] (0xc001c202c0) (0xc001d1f2c0) Stream added, broadcasting: 5 I0615 11:09:38.268664 6 log.go:172] (0xc001c202c0) Reply frame received for 5 I0615 11:09:39.333395 6 log.go:172] (0xc001c202c0) Data frame received for 3 I0615 11:09:39.333430 6 log.go:172] (0xc001addb80) (3) Data frame handling I0615 11:09:39.333442 6 log.go:172] (0xc001addb80) (3) Data frame sent I0615 11:09:39.333840 6 log.go:172] (0xc001c202c0) Data frame received for 3 I0615 11:09:39.333879 6 log.go:172] (0xc001addb80) (3) Data frame handling I0615 11:09:39.333910 6 log.go:172] (0xc001c202c0) Data frame received for 5 I0615 11:09:39.333922 6 log.go:172] (0xc001d1f2c0) (5) Data frame handling I0615 11:09:39.335684 6 log.go:172] (0xc001c202c0) Data frame received 
for 1 I0615 11:09:39.335702 6 log.go:172] (0xc0008d0f00) (1) Data frame handling I0615 11:09:39.335714 6 log.go:172] (0xc0008d0f00) (1) Data frame sent I0615 11:09:39.335731 6 log.go:172] (0xc001c202c0) (0xc0008d0f00) Stream removed, broadcasting: 1 I0615 11:09:39.335749 6 log.go:172] (0xc001c202c0) Go away received I0615 11:09:39.335831 6 log.go:172] (0xc001c202c0) (0xc0008d0f00) Stream removed, broadcasting: 1 I0615 11:09:39.335865 6 log.go:172] (0xc001c202c0) (0xc001addb80) Stream removed, broadcasting: 3 I0615 11:09:39.335879 6 log.go:172] (0xc001c202c0) (0xc001d1f2c0) Stream removed, broadcasting: 5 Jun 15 11:09:39.335: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:09:39.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-lh2b9" for this suite. Jun 15 11:10:09.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:10:09.429: INFO: namespace: e2e-tests-pod-network-test-lh2b9, resource: bindings, ignored listing per whitelist Jun 15 11:10:09.439: INFO: namespace e2e-tests-pod-network-test-lh2b9 deletion completed in 30.098928392s • [SLOW TEST:63.562 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:10:09.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:10:09.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-76x2w" for this suite. 
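The kubelet case above ("a busybox command that always fails in a pod should be possible to delete") needs nothing more than a pod whose container exits non-zero forever and a check that deleting it still works. A hand-run sketch with illustrative names:

kubectl create namespace kubelet-sketch
cat <<'EOF' | kubectl apply -n kubelet-sketch -f -
apiVersion: v1
kind: Pod
metadata:
  name: bin-false
spec:
  containers:
  - name: bin-false
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # always exits non-zero, so the container never stays up
EOF
# the pod ends up crash-looping, but deletion must still succeed
kubectl delete pod bin-false -n kubelet-sketch
kubectl delete namespace kubelet-sketch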
Jun 15 11:10:15.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:10:15.870: INFO: namespace: e2e-tests-kubelet-test-76x2w, resource: bindings, ignored listing per whitelist Jun 15 11:10:15.950: INFO: namespace e2e-tests-kubelet-test-76x2w deletion completed in 6.178858317s • [SLOW TEST:6.511 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:10:15.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 15 11:10:16.058: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca7b30ce-aef8-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-dtc95" to be "success or failure" Jun 15 11:10:16.061: INFO: Pod "downwardapi-volume-ca7b30ce-aef8-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.222695ms Jun 15 11:10:18.100: INFO: Pod "downwardapi-volume-ca7b30ce-aef8-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042162221s Jun 15 11:10:20.106: INFO: Pod "downwardapi-volume-ca7b30ce-aef8-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047486359s STEP: Saw pod success Jun 15 11:10:20.106: INFO: Pod "downwardapi-volume-ca7b30ce-aef8-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:10:20.109: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ca7b30ce-aef8-11ea-99db-0242ac11001b container client-container: STEP: delete the pod Jun 15 11:10:20.276: INFO: Waiting for pod downwardapi-volume-ca7b30ce-aef8-11ea-99db-0242ac11001b to disappear Jun 15 11:10:20.325: INFO: Pod downwardapi-volume-ca7b30ce-aef8-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:10:20.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dtc95" for this suite. 
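For reference, the projected downwardAPI case above amounts to a pod whose projected volume exposes the container's own cpu request as a file. A minimal sketch; the pod name, mount path, request value and divisor here are illustrative, not taken from this run:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request-sketch
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m   # the file then holds the request in millicores, e.g. 250
EOF
kubectl logs downwardapi-cpu-request-sketch   # once the pod has Succeeded, prints the projected request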
Jun 15 11:10:26.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:10:26.448: INFO: namespace: e2e-tests-projected-dtc95, resource: bindings, ignored listing per whitelist Jun 15 11:10:26.498: INFO: namespace e2e-tests-projected-dtc95 deletion completed in 6.169660173s • [SLOW TEST:10.548 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:10:26.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jun 15 11:10:26.678: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2lppk,SelfLink:/api/v1/namespaces/e2e-tests-watch-2lppk/configmaps/e2e-watch-test-label-changed,UID:d0c6e546-aef8-11ea-99e8-0242ac110002,ResourceVersion:16067001,Generation:0,CreationTimestamp:2020-06-15 11:10:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 15 11:10:26.679: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2lppk,SelfLink:/api/v1/namespaces/e2e-tests-watch-2lppk/configmaps/e2e-watch-test-label-changed,UID:d0c6e546-aef8-11ea-99e8-0242ac110002,ResourceVersion:16067003,Generation:0,CreationTimestamp:2020-06-15 11:10:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 15 11:10:26.679: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2lppk,SelfLink:/api/v1/namespaces/e2e-tests-watch-2lppk/configmaps/e2e-watch-test-label-changed,UID:d0c6e546-aef8-11ea-99e8-0242ac110002,ResourceVersion:16067004,Generation:0,CreationTimestamp:2020-06-15 11:10:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jun 15 11:10:36.718: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2lppk,SelfLink:/api/v1/namespaces/e2e-tests-watch-2lppk/configmaps/e2e-watch-test-label-changed,UID:d0c6e546-aef8-11ea-99e8-0242ac110002,ResourceVersion:16067025,Generation:0,CreationTimestamp:2020-06-15 11:10:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 15 11:10:36.718: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2lppk,SelfLink:/api/v1/namespaces/e2e-tests-watch-2lppk/configmaps/e2e-watch-test-label-changed,UID:d0c6e546-aef8-11ea-99e8-0242ac110002,ResourceVersion:16067026,Generation:0,CreationTimestamp:2020-06-15 11:10:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jun 15 11:10:36.719: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-2lppk,SelfLink:/api/v1/namespaces/e2e-tests-watch-2lppk/configmaps/e2e-watch-test-label-changed,UID:d0c6e546-aef8-11ea-99e8-0242ac110002,ResourceVersion:16067027,Generation:0,CreationTimestamp:2020-06-15 11:10:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:10:36.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-2lppk" for this suite. 
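The watch behaviour recorded above (a DELETED event as soon as the label stops matching, a fresh ADDED once it is restored) comes from the label selector on the watch itself and can be reproduced by hand. Sketch, with placeholder names:

# terminal 1: watch only configmaps that carry the selected label
kubectl get configmaps -w -l watch-this-configmap=label-changed-and-restored
# terminal 2: create, relabel away, relabel back, delete
kubectl create configmap e2e-watch-sketch
kubectl label configmap e2e-watch-sketch watch-this-configmap=label-changed-and-restored
kubectl label configmap e2e-watch-sketch watch-this-configmap=something-else --overwrite             # selector no longer matches: the watch is sent DELETED
kubectl label configmap e2e-watch-sketch watch-this-configmap=label-changed-and-restored --overwrite  # matches again: the watch is sent ADDED
kubectl delete configmap e2e-watch-sketch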
Jun 15 11:10:42.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:10:42.895: INFO: namespace: e2e-tests-watch-2lppk, resource: bindings, ignored listing per whitelist Jun 15 11:10:42.928: INFO: namespace e2e-tests-watch-2lppk deletion completed in 6.16731698s • [SLOW TEST:16.429 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:10:42.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-daadfb21-aef8-11ea-99db-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 15 11:10:43.234: INFO: Waiting up to 5m0s for pod "pod-configmaps-daae76b2-aef8-11ea-99db-0242ac11001b" in namespace "e2e-tests-configmap-jgbsb" to be "success or failure" Jun 15 11:10:43.315: INFO: Pod "pod-configmaps-daae76b2-aef8-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 81.246549ms Jun 15 11:10:45.319: INFO: Pod "pod-configmaps-daae76b2-aef8-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085170979s Jun 15 11:10:47.323: INFO: Pod "pod-configmaps-daae76b2-aef8-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089383428s STEP: Saw pod success Jun 15 11:10:47.323: INFO: Pod "pod-configmaps-daae76b2-aef8-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:10:47.326: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-daae76b2-aef8-11ea-99db-0242ac11001b container configmap-volume-test: STEP: delete the pod Jun 15 11:10:47.371: INFO: Waiting for pod pod-configmaps-daae76b2-aef8-11ea-99db-0242ac11001b to disappear Jun 15 11:10:47.381: INFO: Pod pod-configmaps-daae76b2-aef8-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:10:47.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-jgbsb" for this suite. 
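The defaultMode variant above only adds a file mode that every key projected from the configMap volume is created with. Sketch; names, key and mode value are illustrative:

kubectl create configmap cm-defaultmode-sketch --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-defaultmode-sketch
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: cm-defaultmode-sketch
      defaultMode: 0400   # every projected key is created with this mode (read-only for the owner)
EOF
kubectl logs cm-defaultmode-sketch   # prints value-1 once the pod has Succeeded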
Jun 15 11:10:53.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:10:53.505: INFO: namespace: e2e-tests-configmap-jgbsb, resource: bindings, ignored listing per whitelist Jun 15 11:10:53.505: INFO: namespace e2e-tests-configmap-jgbsb deletion completed in 6.121470672s • [SLOW TEST:10.577 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:10:53.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 15 11:10:53.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-nqqxj' Jun 15 11:10:55.911: INFO: stderr: "" Jun 15 11:10:55.911: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jun 15 11:11:00.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-nqqxj -o json' Jun 15 11:11:01.063: INFO: stderr: "" Jun 15 11:11:01.063: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-06-15T11:10:55Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-nqqxj\",\n \"resourceVersion\": \"16067109\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-nqqxj/pods/e2e-test-nginx-pod\",\n \"uid\": \"e23bf60e-aef8-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-z55xq\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n 
\"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-z55xq\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-z55xq\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-15T11:10:55Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-15T11:10:58Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-15T11:10:58Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-15T11:10:55Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://9605e3eb70ceeac3567c8ff2d271ee09b1dd079dc98c038402c726c08444219e\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-06-15T11:10:58Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.208\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-06-15T11:10:55Z\"\n }\n}\n" STEP: replace the image in the pod Jun 15 11:11:01.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-nqqxj' Jun 15 11:11:01.342: INFO: stderr: "" Jun 15 11:11:01.342: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Jun 15 11:11:01.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-nqqxj' Jun 15 11:11:04.607: INFO: stderr: "" Jun 15 11:11:04.607: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:11:04.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nqqxj" for this suite. 
Jun 15 11:11:10.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:11:10.661: INFO: namespace: e2e-tests-kubectl-nqqxj, resource: bindings, ignored listing per whitelist Jun 15 11:11:10.702: INFO: namespace e2e-tests-kubectl-nqqxj deletion completed in 6.091768482s • [SLOW TEST:17.197 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:11:10.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Jun 15 11:11:10.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-mpfh8 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jun 15 11:11:14.127: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0615 11:11:14.061593 619 log.go:172] (0xc0001386e0) (0xc000738140) Create stream\nI0615 11:11:14.061674 619 log.go:172] (0xc0001386e0) (0xc000738140) Stream added, broadcasting: 1\nI0615 11:11:14.063846 619 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0615 11:11:14.063886 619 log.go:172] (0xc0001386e0) (0xc0005b0000) Create stream\nI0615 11:11:14.063898 619 log.go:172] (0xc0001386e0) (0xc0005b0000) Stream added, broadcasting: 3\nI0615 11:11:14.064516 619 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0615 11:11:14.064543 619 log.go:172] (0xc0001386e0) (0xc000748a00) Create stream\nI0615 11:11:14.064555 619 log.go:172] (0xc0001386e0) (0xc000748a00) Stream added, broadcasting: 5\nI0615 11:11:14.065505 619 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0615 11:11:14.065543 619 log.go:172] (0xc0001386e0) (0xc0005b00a0) Create stream\nI0615 11:11:14.065557 619 log.go:172] (0xc0001386e0) (0xc0005b00a0) Stream added, broadcasting: 7\nI0615 11:11:14.066219 619 log.go:172] (0xc0001386e0) Reply frame received for 7\nI0615 11:11:14.066400 619 log.go:172] (0xc0005b0000) (3) Writing data frame\nI0615 11:11:14.066514 619 log.go:172] (0xc0005b0000) (3) Writing data frame\nI0615 11:11:14.067257 619 log.go:172] (0xc0001386e0) Data frame received for 5\nI0615 11:11:14.067281 619 log.go:172] (0xc000748a00) (5) Data frame handling\nI0615 11:11:14.067297 619 log.go:172] (0xc000748a00) (5) Data frame sent\nI0615 11:11:14.067844 619 log.go:172] (0xc0001386e0) Data frame received for 5\nI0615 11:11:14.067855 619 log.go:172] (0xc000748a00) (5) Data frame handling\nI0615 11:11:14.067863 619 log.go:172] (0xc000748a00) (5) Data frame sent\nI0615 11:11:14.103406 619 log.go:172] (0xc0001386e0) Data frame received for 5\nI0615 11:11:14.103450 619 log.go:172] (0xc000748a00) (5) Data frame handling\nI0615 11:11:14.103480 619 log.go:172] (0xc0001386e0) Data frame received for 7\nI0615 11:11:14.103510 619 log.go:172] (0xc0005b00a0) (7) Data frame handling\nI0615 11:11:14.103841 619 log.go:172] (0xc0001386e0) Data frame received for 1\nI0615 11:11:14.103857 619 log.go:172] (0xc000738140) (1) Data frame handling\nI0615 11:11:14.103864 619 log.go:172] (0xc000738140) (1) Data frame sent\nI0615 11:11:14.103961 619 log.go:172] (0xc0001386e0) (0xc000738140) Stream removed, broadcasting: 1\nI0615 11:11:14.104030 619 log.go:172] (0xc0001386e0) (0xc0005b0000) Stream removed, broadcasting: 3\nI0615 11:11:14.104064 619 log.go:172] (0xc0001386e0) (0xc000738140) Stream removed, broadcasting: 1\nI0615 11:11:14.104072 619 log.go:172] (0xc0001386e0) (0xc0005b0000) Stream removed, broadcasting: 3\nI0615 11:11:14.104077 619 log.go:172] (0xc0001386e0) (0xc000748a00) Stream removed, broadcasting: 5\nI0615 11:11:14.104169 619 log.go:172] (0xc0001386e0) (0xc0005b00a0) Stream removed, broadcasting: 7\nI0615 11:11:14.104190 619 log.go:172] (0xc0001386e0) Go away received\n" Jun 15 11:11:14.128: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:11:16.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mpfh8" for this suite. 
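The --rm job above pipes a string over an attached stdin into the job's pod, waits for the command to finish, and then deletes the job itself. The same invocation, trimmed to a hand-runnable sketch (the deprecation warning for --generator=job/v1 is expected on this kubectl version):

echo 'abcd1234' | kubectl run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure \
  --attach=true --stdin \
  -- sh -c 'cat && echo "stdin closed"'
# expected: the echoed stdin, then "stdin closed", then: job.batch "e2e-test-rm-busybox-job" deleted
kubectl get job e2e-test-rm-busybox-job   # NotFound once --rm has cleaned up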
Jun 15 11:11:22.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:11:22.230: INFO: namespace: e2e-tests-kubectl-mpfh8, resource: bindings, ignored listing per whitelist Jun 15 11:11:22.250: INFO: namespace e2e-tests-kubectl-mpfh8 deletion completed in 6.094589245s • [SLOW TEST:11.548 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:11:22.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 15 11:11:26.896: INFO: Successfully updated pod "pod-update-f1ff88a8-aef8-11ea-99db-0242ac11001b" STEP: verifying the updated pod is in kubernetes Jun 15 11:11:26.904: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:11:26.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-wczqd" for this suite. 
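The pod-update case boils down to mutating a field on a live pod and reading the change back; which field the e2e actually mutates is not visible in this log, so the label used below is purely illustrative:

kubectl run pod-update-sketch --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
kubectl label pod pod-update-sketch time=now --overwrite   # update the live pod
kubectl get pod pod-update-sketch --show-labels            # verify the update stuck
kubectl delete pod pod-update-sketch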
Jun 15 11:11:48.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:11:48.926: INFO: namespace: e2e-tests-pods-wczqd, resource: bindings, ignored listing per whitelist Jun 15 11:11:48.991: INFO: namespace e2e-tests-pods-wczqd deletion completed in 22.082988641s • [SLOW TEST:26.741 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:11:48.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 15 11:11:57.163: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 15 11:11:57.191: INFO: Pod pod-with-poststart-http-hook still exists Jun 15 11:11:59.192: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 15 11:11:59.196: INFO: Pod pod-with-poststart-http-hook still exists Jun 15 11:12:01.192: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 15 11:12:01.195: INFO: Pod pod-with-poststart-http-hook still exists Jun 15 11:12:03.192: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 15 11:12:03.196: INFO: Pod pod-with-poststart-http-hook still exists Jun 15 11:12:05.192: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 15 11:12:05.196: INFO: Pod pod-with-poststart-http-hook still exists Jun 15 11:12:07.192: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 15 11:12:07.196: INFO: Pod pod-with-poststart-http-hook still exists Jun 15 11:12:09.192: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 15 11:12:09.196: INFO: Pod pod-with-poststart-http-hook still exists Jun 15 11:12:11.192: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 15 11:12:11.196: INFO: Pod pod-with-poststart-http-hook still exists Jun 15 11:12:13.192: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 15 11:12:13.197: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:12:13.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-4dzbz" for this suite. 
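The http flavour of the poststart hook points lifecycle.postStart.httpGet at the handler pod created in the "create the container to handle the HTTPGet hook request" step; the kubelet issues that GET right after the container starts. A minimal sketch of the hooked pod; the host below is a placeholder and must point at something that actually serves the endpoint, otherwise the hook fails and the container is restarted:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook-sketch
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      postStart:
        httpGet:
          host: 10.244.2.200   # placeholder: IP of the pod that handles the hook request
          port: 8080
          path: /echo?msg=poststart
EOF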
Jun 15 11:12:35.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:12:35.305: INFO: namespace: e2e-tests-container-lifecycle-hook-4dzbz, resource: bindings, ignored listing per whitelist Jun 15 11:12:35.334: INFO: namespace e2e-tests-container-lifecycle-hook-4dzbz deletion completed in 22.129983464s • [SLOW TEST:46.343 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:12:35.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 15 11:12:35.447: INFO: Waiting up to 5m0s for pod "downward-api-1d8bfe83-aef9-11ea-99db-0242ac11001b" in namespace "e2e-tests-downward-api-8hg7k" to be "success or failure" Jun 15 11:12:35.455: INFO: Pod "downward-api-1d8bfe83-aef9-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.322232ms Jun 15 11:12:37.459: INFO: Pod "downward-api-1d8bfe83-aef9-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012478914s Jun 15 11:12:39.474: INFO: Pod "downward-api-1d8bfe83-aef9-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026896005s STEP: Saw pod success Jun 15 11:12:39.474: INFO: Pod "downward-api-1d8bfe83-aef9-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:12:39.477: INFO: Trying to get logs from node hunter-worker2 pod downward-api-1d8bfe83-aef9-11ea-99db-0242ac11001b container dapi-container: STEP: delete the pod Jun 15 11:12:39.513: INFO: Waiting for pod downward-api-1d8bfe83-aef9-11ea-99db-0242ac11001b to disappear Jun 15 11:12:39.516: INFO: Pod downward-api-1d8bfe83-aef9-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:12:39.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8hg7k" for this suite. 
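The downward API env-var case maps the container's own limits and requests into environment variables through resourceFieldRef. Sketch; names and resource values are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-sketch
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "env | grep -E 'CPU_|MEMORY_'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
EOF
kubectl logs downward-api-env-sketch   # once Succeeded, shows the four variables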
Jun 15 11:12:45.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:12:45.555: INFO: namespace: e2e-tests-downward-api-8hg7k, resource: bindings, ignored listing per whitelist Jun 15 11:12:45.639: INFO: namespace e2e-tests-downward-api-8hg7k deletion completed in 6.11976284s • [SLOW TEST:10.305 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:12:45.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-23b9feae-aef9-11ea-99db-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 15 11:12:45.782: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-23ba9b05-aef9-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-sdr54" to be "success or failure" Jun 15 11:12:45.796: INFO: Pod "pod-projected-configmaps-23ba9b05-aef9-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.899041ms Jun 15 11:12:47.800: INFO: Pod "pod-projected-configmaps-23ba9b05-aef9-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017942174s Jun 15 11:12:49.819: INFO: Pod "pod-projected-configmaps-23ba9b05-aef9-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036564129s STEP: Saw pod success Jun 15 11:12:49.819: INFO: Pod "pod-projected-configmaps-23ba9b05-aef9-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:12:49.822: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-23ba9b05-aef9-11ea-99db-0242ac11001b container projected-configmap-volume-test: STEP: delete the pod Jun 15 11:12:49.895: INFO: Waiting for pod pod-projected-configmaps-23ba9b05-aef9-11ea-99db-0242ac11001b to disappear Jun 15 11:12:49.899: INFO: Pod pod-projected-configmaps-23ba9b05-aef9-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:12:49.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-sdr54" for this suite. 
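The projected-configmap-volume-test container above consumes the ConfigMap through a projected volume whose items remap a key to a different path. A sketch under the same v1.13-era type assumptions; the ConfigMap name (the log appends a UID suffix), key and remapped path are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Volume that projects one ConfigMap key to a remapped path inside the container.
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume-map", // assumed name
						},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",         // assumed key
							Path: "path/to/data-2", // assumed remapped path
						}},
					},
				}},
			},
		},
	}
	mount := corev1.VolumeMount{
		Name:      "projected-configmap-volume",
		MountPath: "/etc/projected-configmap-volume", // assumed mount path
	}
	volJSON, _ := json.MarshalIndent(vol, "", "  ")
	mountJSON, _ := json.MarshalIndent(mount, "", "  ")
	fmt.Println(string(volJSON))
	fmt.Println(string(mountJSON))
}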
Jun 15 11:12:55.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:12:55.940: INFO: namespace: e2e-tests-projected-sdr54, resource: bindings, ignored listing per whitelist Jun 15 11:12:55.997: INFO: namespace e2e-tests-projected-sdr54 deletion completed in 6.094086288s • [SLOW TEST:10.358 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:12:55.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:12:56.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-65wrk" for this suite. 
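The QOS-class test above only submits a pod and checks the class recorded in pod.Status.QOSClass; the class is derived from the containers' resources, and fully specified, equal requests and limits yield Guaranteed. A sketch with assumed quantities:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	guaranteed := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"), // assumed quantities
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-qos-class-"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "nginx",
				Image: "nginx", // illustrative image
				// Requests == Limits for every resource of every container => Guaranteed QoS.
				Resources: corev1.ResourceRequirements{Requests: guaranteed, Limits: guaranteed},
			}},
		},
	}
	// Once submitted, the derived class shows up in the pod's status:
	//   pod.Status.QOSClass == corev1.PodQOSGuaranteed
	fmt.Printf("expected QoS class for %q pods: %s\n", pod.GenerateName, corev1.PodQOSGuaranteed)
}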
Jun 15 11:13:18.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:13:18.291: INFO: namespace: e2e-tests-pods-65wrk, resource: bindings, ignored listing per whitelist Jun 15 11:13:18.296: INFO: namespace e2e-tests-pods-65wrk deletion completed in 22.131854215s • [SLOW TEST:22.297 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:13:18.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-bv8x5/configmap-test-3747338a-aef9-11ea-99db-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 15 11:13:18.579: INFO: Waiting up to 5m0s for pod "pod-configmaps-3747b092-aef9-11ea-99db-0242ac11001b" in namespace "e2e-tests-configmap-bv8x5" to be "success or failure" Jun 15 11:13:18.594: INFO: Pod "pod-configmaps-3747b092-aef9-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.172686ms Jun 15 11:13:20.598: INFO: Pod "pod-configmaps-3747b092-aef9-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019175015s Jun 15 11:13:22.603: INFO: Pod "pod-configmaps-3747b092-aef9-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023513142s STEP: Saw pod success Jun 15 11:13:22.603: INFO: Pod "pod-configmaps-3747b092-aef9-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:13:22.607: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-3747b092-aef9-11ea-99db-0242ac11001b container env-test: STEP: delete the pod Jun 15 11:13:22.676: INFO: Waiting for pod pod-configmaps-3747b092-aef9-11ea-99db-0242ac11001b to disappear Jun 15 11:13:22.703: INFO: Pod pod-configmaps-3747b092-aef9-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:13:22.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-bv8x5" for this suite. 
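The env-test container above reads a ConfigMap key through valueFrom/configMapKeyRef rather than a volume. A sketch of that single env var, with an assumed variable name, ConfigMap name and key:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Env var whose value is pulled from one key of the test ConfigMap.
	env := corev1.EnvVar{
		Name: "CONFIG_DATA_1", // assumed variable name
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "configmap-test", // assumed; the log appends a UID suffix
				},
				Key: "data-1", // assumed key
			},
		},
	}
	out, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(out))
}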
Jun 15 11:13:28.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:13:28.779: INFO: namespace: e2e-tests-configmap-bv8x5, resource: bindings, ignored listing per whitelist Jun 15 11:13:28.802: INFO: namespace e2e-tests-configmap-bv8x5 deletion completed in 6.096303154s • [SLOW TEST:10.506 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:13:28.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-3d6fcc17-aef9-11ea-99db-0242ac11001b STEP: Creating a pod to test consume secrets Jun 15 11:13:28.924: INFO: Waiting up to 5m0s for pod "pod-secrets-3d71ce07-aef9-11ea-99db-0242ac11001b" in namespace "e2e-tests-secrets-7pm2q" to be "success or failure" Jun 15 11:13:29.002: INFO: Pod "pod-secrets-3d71ce07-aef9-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 77.522995ms Jun 15 11:13:31.006: INFO: Pod "pod-secrets-3d71ce07-aef9-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081437182s Jun 15 11:13:33.010: INFO: Pod "pod-secrets-3d71ce07-aef9-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085716186s STEP: Saw pod success Jun 15 11:13:33.010: INFO: Pod "pod-secrets-3d71ce07-aef9-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:13:33.013: INFO: Trying to get logs from node hunter-worker pod pod-secrets-3d71ce07-aef9-11ea-99db-0242ac11001b container secret-volume-test: STEP: delete the pod Jun 15 11:13:33.039: INFO: Waiting for pod pod-secrets-3d71ce07-aef9-11ea-99db-0242ac11001b to disappear Jun 15 11:13:33.049: INFO: Pod pod-secrets-3d71ce07-aef9-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:13:33.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-7pm2q" for this suite. 
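The secret-volume-test container above mounts the Secret as a volume and reads the key back as a file. A sketch of the volume and its mount, with assumed names and mount path:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Secret mounted read-only into the test container.
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "secret-test", // assumed; the log appends a UID suffix
			},
		},
	}
	mount := corev1.VolumeMount{
		Name:      "secret-volume",
		MountPath: "/etc/secret-volume", // assumed mount path
		ReadOnly:  true,
	}
	volJSON, _ := json.MarshalIndent(vol, "", "  ")
	mountJSON, _ := json.MarshalIndent(mount, "", "  ")
	fmt.Println(string(volJSON))
	fmt.Println(string(mountJSON))
}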
Jun 15 11:13:39.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:13:39.398: INFO: namespace: e2e-tests-secrets-7pm2q, resource: bindings, ignored listing per whitelist Jun 15 11:13:39.425: INFO: namespace e2e-tests-secrets-7pm2q deletion completed in 6.373070615s • [SLOW TEST:10.622 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:13:39.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 15 11:13:46.118: INFO: Successfully updated pod "labelsupdate43c604e9-aef9-11ea-99db-0242ac11001b" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:13:48.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7k8wn" for this suite. 
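The labelsupdate pod above mounts its own metadata.labels through a downward-API volume; the kubelet rewrites the projected file when the labels change, which is what the "Successfully updated pod" step then relies on. A sketch of that volume:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Downward API volume exposing the pod's labels as a file named "labels";
	// the kubelet refreshes the file after the pod object's labels are modified.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "labels",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}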
Jun 15 11:14:10.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:14:10.234: INFO: namespace: e2e-tests-downward-api-7k8wn, resource: bindings, ignored listing per whitelist Jun 15 11:14:10.286: INFO: namespace e2e-tests-downward-api-7k8wn deletion completed in 22.090363419s • [SLOW TEST:30.861 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:14:10.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 15 11:14:10.426: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jun 15 11:14:15.431: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 15 11:14:15.431: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jun 15 11:14:17.435: INFO: Creating deployment "test-rollover-deployment" Jun 15 11:14:17.477: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jun 15 11:14:19.484: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jun 15 11:14:19.494: INFO: Ensure that both replica sets have 1 created replica Jun 15 11:14:19.499: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jun 15 11:14:19.577: INFO: Updating deployment test-rollover-deployment Jun 15 11:14:19.578: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jun 15 11:14:21.604: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jun 15 11:14:21.610: INFO: Make sure deployment "test-rollover-deployment" is complete Jun 15 11:14:21.615: INFO: all replica sets need to contain the pod-template-hash label Jun 15 11:14:21.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816459, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 15 11:14:23.625: INFO: all replica sets need to contain the pod-template-hash label Jun 15 11:14:23.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816463, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 15 11:14:25.627: INFO: all replica sets need to contain the pod-template-hash label Jun 15 11:14:25.628: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816463, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 15 11:14:27.641: INFO: all replica sets need to contain the pod-template-hash label Jun 15 11:14:27.641: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816463, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 15 11:14:29.630: INFO: all replica sets need to contain the pod-template-hash label Jun 15 11:14:29.630: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816463, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 15 11:14:31.642: INFO: all replica sets need to contain the pod-template-hash label Jun 15 11:14:31.642: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816463, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 15 11:14:33.670: INFO: Jun 15 11:14:33.671: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816473, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727816457, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 15 11:14:35.624: INFO: Jun 15 11:14:35.624: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 15 11:14:35.633: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-nh42v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nh42v/deployments/test-rollover-deployment,UID:5a5d66a3-aef9-11ea-99e8-0242ac110002,ResourceVersion:16067868,Generation:2,CreationTimestamp:2020-06-15 11:14:17 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-15 11:14:17 +0000 UTC 2020-06-15 11:14:17 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-15 11:14:33 +0000 UTC 2020-06-15 11:14:17 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 15 11:14:35.637: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-nh42v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nh42v/replicasets/test-rollover-deployment-5b8479fdb6,UID:5ba44364-aef9-11ea-99e8-0242ac110002,ResourceVersion:16067858,Generation:2,CreationTimestamp:2020-06-15 11:14:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 5a5d66a3-aef9-11ea-99e8-0242ac110002 0xc0016affd7 
0xc0016affd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 15 11:14:35.637: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 15 11:14:35.637: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-nh42v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nh42v/replicasets/test-rollover-controller,UID:56258232-aef9-11ea-99e8-0242ac110002,ResourceVersion:16067867,Generation:2,CreationTimestamp:2020-06-15 11:14:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 5a5d66a3-aef9-11ea-99e8-0242ac110002 0xc0016afe37 0xc0016afe38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil 
/dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 15 11:14:35.637: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-nh42v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nh42v/replicasets/test-rollover-deployment-58494b7559,UID:5a65bc25-aef9-11ea-99e8-0242ac110002,ResourceVersion:16067814,Generation:2,CreationTimestamp:2020-06-15 11:14:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 5a5d66a3-aef9-11ea-99e8-0242ac110002 0xc0016afef7 0xc0016afef8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 15 11:14:35.641: INFO: Pod "test-rollover-deployment-5b8479fdb6-b4kqm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-b4kqm,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-nh42v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nh42v/pods/test-rollover-deployment-5b8479fdb6-b4kqm,UID:5bb87a9d-aef9-11ea-99e8-0242ac110002,ResourceVersion:16067836,Generation:0,CreationTimestamp:2020-06-15 11:14:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 5ba44364-aef9-11ea-99e8-0242ac110002 0xc000b54287 0xc000b54288}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8zdx9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8zdx9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-8zdx9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b54310} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b54330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:14:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:14:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:14:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-06-15 11:14:19 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.6,StartTime:2020-06-15 11:14:19 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-15 11:14:22 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://e6cd7c1775d76c91286a67ed6263caa583c453cd6662c4f97f08a0ed0172d26d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:14:35.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-nh42v" for this suite. Jun 15 11:14:43.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:14:43.725: INFO: namespace: e2e-tests-deployment-nh42v, resource: bindings, ignored listing per whitelist Jun 15 11:14:43.741: INFO: namespace e2e-tests-deployment-nh42v deletion completed in 8.095467991s • [SLOW TEST:33.455 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:14:43.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 15 11:15:07.851: INFO: Container started at 2020-06-15 11:14:46 +0000 UTC, pod became ready at 2020-06-15 11:15:07 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:15:07.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-2n9bg" for this suite. 
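The container-probe test above gates readiness behind an initial delay (container started 11:14:46, Ready 11:15:07) and expects zero restarts. A sketch of such a container, with an assumed command, delay and period; note the v1.13-era Probe type embeds Handler (renamed ProbeHandler in newer releases):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Container whose readiness is delayed by the probe's initial delay; with no
	// liveness probe and a long-running command, it is expected never to restart.
	container := corev1.Container{
		Name:    "readiness-test",                                         // assumed name
		Image:   "busybox",                                                // illustrative image
		Command: []string{"sh", "-c", "touch /tmp/ready && sleep 600"},    // assumed command
		ReadinessProbe: &corev1.Probe{
			Handler: corev1.Handler{ // embedded field in the v1.13-era API
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}},
			},
			InitialDelaySeconds: 20, // assumed; roughly matches the ~21s gap logged above
			PeriodSeconds:       5,  // assumed probe period
		},
	}
	out, _ := json.MarshalIndent(container, "", "  ")
	fmt.Println(string(out))
}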
Jun 15 11:15:29.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:15:29.920: INFO: namespace: e2e-tests-container-probe-2n9bg, resource: bindings, ignored listing per whitelist Jun 15 11:15:29.987: INFO: namespace e2e-tests-container-probe-2n9bg deletion completed in 22.13247244s • [SLOW TEST:46.246 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:15:29.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-4d9fn [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-4d9fn STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-4d9fn Jun 15 11:15:30.136: INFO: Found 0 stateful pods, waiting for 1 Jun 15 11:15:40.141: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jun 15 11:15:40.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4d9fn ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 15 11:15:40.399: INFO: stderr: "I0615 11:15:40.276493 646 log.go:172] (0xc0001386e0) (0xc0007af360) Create stream\nI0615 11:15:40.276556 646 log.go:172] (0xc0001386e0) (0xc0007af360) Stream added, broadcasting: 1\nI0615 11:15:40.279185 646 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0615 11:15:40.279230 646 log.go:172] (0xc0001386e0) (0xc0007af400) Create stream\nI0615 11:15:40.279237 646 log.go:172] (0xc0001386e0) (0xc0007af400) Stream added, broadcasting: 3\nI0615 11:15:40.280231 646 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0615 11:15:40.280275 646 log.go:172] (0xc0001386e0) (0xc000702000) Create stream\nI0615 11:15:40.280428 646 log.go:172] (0xc0001386e0) (0xc000702000) Stream added, broadcasting: 5\nI0615 11:15:40.281687 646 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0615 11:15:40.390874 646 log.go:172] (0xc0001386e0) Data frame received for 3\nI0615 11:15:40.390919 646 log.go:172] (0xc0007af400) 
(3) Data frame handling\nI0615 11:15:40.390956 646 log.go:172] (0xc0007af400) (3) Data frame sent\nI0615 11:15:40.391348 646 log.go:172] (0xc0001386e0) Data frame received for 5\nI0615 11:15:40.391398 646 log.go:172] (0xc000702000) (5) Data frame handling\nI0615 11:15:40.391433 646 log.go:172] (0xc0001386e0) Data frame received for 3\nI0615 11:15:40.391452 646 log.go:172] (0xc0007af400) (3) Data frame handling\nI0615 11:15:40.393393 646 log.go:172] (0xc0001386e0) Data frame received for 1\nI0615 11:15:40.393414 646 log.go:172] (0xc0007af360) (1) Data frame handling\nI0615 11:15:40.393426 646 log.go:172] (0xc0007af360) (1) Data frame sent\nI0615 11:15:40.393439 646 log.go:172] (0xc0001386e0) (0xc0007af360) Stream removed, broadcasting: 1\nI0615 11:15:40.393680 646 log.go:172] (0xc0001386e0) Go away received\nI0615 11:15:40.393752 646 log.go:172] (0xc0001386e0) (0xc0007af360) Stream removed, broadcasting: 1\nI0615 11:15:40.393771 646 log.go:172] (0xc0001386e0) (0xc0007af400) Stream removed, broadcasting: 3\nI0615 11:15:40.393787 646 log.go:172] (0xc0001386e0) (0xc000702000) Stream removed, broadcasting: 5\n" Jun 15 11:15:40.400: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 15 11:15:40.400: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 15 11:15:40.403: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 15 11:15:50.408: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 15 11:15:50.408: INFO: Waiting for statefulset status.replicas updated to 0 Jun 15 11:15:50.531: INFO: POD NODE PHASE GRACE CONDITIONS Jun 15 11:15:50.531: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC }] Jun 15 11:15:50.532: INFO: Jun 15 11:15:50.532: INFO: StatefulSet ss has not reached scale 3, at 1 Jun 15 11:15:51.539: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.885290574s Jun 15 11:15:52.615: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.878022927s Jun 15 11:15:53.623: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.80143384s Jun 15 11:15:54.629: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.793449826s Jun 15 11:15:55.632: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.787713227s Jun 15 11:15:56.637: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.784432906s Jun 15 11:15:57.642: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.779357395s Jun 15 11:15:58.647: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.774894579s Jun 15 11:15:59.652: INFO: Verifying statefulset ss doesn't scale past 3 for another 769.33612ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-4d9fn Jun 15 11:16:00.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4d9fn ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 
15 11:16:00.847: INFO: stderr: "I0615 11:16:00.786061 667 log.go:172] (0xc0002164d0) (0xc000708640) Create stream\nI0615 11:16:00.786124 667 log.go:172] (0xc0002164d0) (0xc000708640) Stream added, broadcasting: 1\nI0615 11:16:00.788090 667 log.go:172] (0xc0002164d0) Reply frame received for 1\nI0615 11:16:00.788122 667 log.go:172] (0xc0002164d0) (0xc0005b4d20) Create stream\nI0615 11:16:00.788129 667 log.go:172] (0xc0002164d0) (0xc0005b4d20) Stream added, broadcasting: 3\nI0615 11:16:00.788888 667 log.go:172] (0xc0002164d0) Reply frame received for 3\nI0615 11:16:00.788927 667 log.go:172] (0xc0002164d0) (0xc000654000) Create stream\nI0615 11:16:00.788940 667 log.go:172] (0xc0002164d0) (0xc000654000) Stream added, broadcasting: 5\nI0615 11:16:00.789788 667 log.go:172] (0xc0002164d0) Reply frame received for 5\nI0615 11:16:00.842009 667 log.go:172] (0xc0002164d0) Data frame received for 5\nI0615 11:16:00.842042 667 log.go:172] (0xc000654000) (5) Data frame handling\nI0615 11:16:00.842065 667 log.go:172] (0xc0002164d0) Data frame received for 3\nI0615 11:16:00.842073 667 log.go:172] (0xc0005b4d20) (3) Data frame handling\nI0615 11:16:00.842083 667 log.go:172] (0xc0005b4d20) (3) Data frame sent\nI0615 11:16:00.842091 667 log.go:172] (0xc0002164d0) Data frame received for 3\nI0615 11:16:00.842098 667 log.go:172] (0xc0005b4d20) (3) Data frame handling\nI0615 11:16:00.843100 667 log.go:172] (0xc0002164d0) Data frame received for 1\nI0615 11:16:00.843118 667 log.go:172] (0xc000708640) (1) Data frame handling\nI0615 11:16:00.843126 667 log.go:172] (0xc000708640) (1) Data frame sent\nI0615 11:16:00.843134 667 log.go:172] (0xc0002164d0) (0xc000708640) Stream removed, broadcasting: 1\nI0615 11:16:00.843146 667 log.go:172] (0xc0002164d0) Go away received\nI0615 11:16:00.843342 667 log.go:172] (0xc0002164d0) (0xc000708640) Stream removed, broadcasting: 1\nI0615 11:16:00.843368 667 log.go:172] (0xc0002164d0) (0xc0005b4d20) Stream removed, broadcasting: 3\nI0615 11:16:00.843389 667 log.go:172] (0xc0002164d0) (0xc000654000) Stream removed, broadcasting: 5\n" Jun 15 11:16:00.848: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 15 11:16:00.848: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 15 11:16:00.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4d9fn ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 15 11:16:01.074: INFO: stderr: "I0615 11:16:00.991891 688 log.go:172] (0xc0008362c0) (0xc00073a640) Create stream\nI0615 11:16:00.991988 688 log.go:172] (0xc0008362c0) (0xc00073a640) Stream added, broadcasting: 1\nI0615 11:16:00.996239 688 log.go:172] (0xc0008362c0) Reply frame received for 1\nI0615 11:16:00.996281 688 log.go:172] (0xc0008362c0) (0xc0005e8e60) Create stream\nI0615 11:16:00.996308 688 log.go:172] (0xc0008362c0) (0xc0005e8e60) Stream added, broadcasting: 3\nI0615 11:16:00.998755 688 log.go:172] (0xc0008362c0) Reply frame received for 3\nI0615 11:16:00.998807 688 log.go:172] (0xc0008362c0) (0xc00051a000) Create stream\nI0615 11:16:00.998834 688 log.go:172] (0xc0008362c0) (0xc00051a000) Stream added, broadcasting: 5\nI0615 11:16:01.000849 688 log.go:172] (0xc0008362c0) Reply frame received for 5\nI0615 11:16:01.066330 688 log.go:172] (0xc0008362c0) Data frame received for 3\nI0615 11:16:01.066373 688 log.go:172] (0xc0005e8e60) (3) Data frame handling\nI0615 11:16:01.066391 688 
log.go:172] (0xc0005e8e60) (3) Data frame sent\nI0615 11:16:01.066419 688 log.go:172] (0xc0008362c0) Data frame received for 5\nI0615 11:16:01.066431 688 log.go:172] (0xc00051a000) (5) Data frame handling\nI0615 11:16:01.066445 688 log.go:172] (0xc00051a000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0615 11:16:01.066465 688 log.go:172] (0xc0008362c0) Data frame received for 3\nI0615 11:16:01.066489 688 log.go:172] (0xc0005e8e60) (3) Data frame handling\nI0615 11:16:01.066501 688 log.go:172] (0xc0008362c0) Data frame received for 5\nI0615 11:16:01.066527 688 log.go:172] (0xc00051a000) (5) Data frame handling\nI0615 11:16:01.068074 688 log.go:172] (0xc0008362c0) Data frame received for 1\nI0615 11:16:01.068098 688 log.go:172] (0xc00073a640) (1) Data frame handling\nI0615 11:16:01.068117 688 log.go:172] (0xc00073a640) (1) Data frame sent\nI0615 11:16:01.068140 688 log.go:172] (0xc0008362c0) (0xc00073a640) Stream removed, broadcasting: 1\nI0615 11:16:01.068154 688 log.go:172] (0xc0008362c0) Go away received\nI0615 11:16:01.068390 688 log.go:172] (0xc0008362c0) (0xc00073a640) Stream removed, broadcasting: 1\nI0615 11:16:01.068413 688 log.go:172] (0xc0008362c0) (0xc0005e8e60) Stream removed, broadcasting: 3\nI0615 11:16:01.068428 688 log.go:172] (0xc0008362c0) (0xc00051a000) Stream removed, broadcasting: 5\n" Jun 15 11:16:01.074: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 15 11:16:01.074: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 15 11:16:01.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4d9fn ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 15 11:16:01.285: INFO: stderr: "I0615 11:16:01.202510 710 log.go:172] (0xc000138840) (0xc000740640) Create stream\nI0615 11:16:01.202559 710 log.go:172] (0xc000138840) (0xc000740640) Stream added, broadcasting: 1\nI0615 11:16:01.204977 710 log.go:172] (0xc000138840) Reply frame received for 1\nI0615 11:16:01.205023 710 log.go:172] (0xc000138840) (0xc0005f4d20) Create stream\nI0615 11:16:01.205034 710 log.go:172] (0xc000138840) (0xc0005f4d20) Stream added, broadcasting: 3\nI0615 11:16:01.206228 710 log.go:172] (0xc000138840) Reply frame received for 3\nI0615 11:16:01.206275 710 log.go:172] (0xc000138840) (0xc0006ba000) Create stream\nI0615 11:16:01.206291 710 log.go:172] (0xc000138840) (0xc0006ba000) Stream added, broadcasting: 5\nI0615 11:16:01.207259 710 log.go:172] (0xc000138840) Reply frame received for 5\nI0615 11:16:01.279116 710 log.go:172] (0xc000138840) Data frame received for 3\nI0615 11:16:01.279144 710 log.go:172] (0xc0005f4d20) (3) Data frame handling\nI0615 11:16:01.279161 710 log.go:172] (0xc0005f4d20) (3) Data frame sent\nI0615 11:16:01.279174 710 log.go:172] (0xc000138840) Data frame received for 3\nI0615 11:16:01.279193 710 log.go:172] (0xc0005f4d20) (3) Data frame handling\nI0615 11:16:01.279268 710 log.go:172] (0xc000138840) Data frame received for 5\nI0615 11:16:01.279281 710 log.go:172] (0xc0006ba000) (5) Data frame handling\nI0615 11:16:01.279295 710 log.go:172] (0xc0006ba000) (5) Data frame sent\nI0615 11:16:01.279301 710 log.go:172] (0xc000138840) Data frame received for 5\nI0615 11:16:01.279306 710 log.go:172] (0xc0006ba000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0615 11:16:01.280664 710 log.go:172] (0xc000138840) Data 
frame received for 1\nI0615 11:16:01.280681 710 log.go:172] (0xc000740640) (1) Data frame handling\nI0615 11:16:01.280698 710 log.go:172] (0xc000740640) (1) Data frame sent\nI0615 11:16:01.280724 710 log.go:172] (0xc000138840) (0xc000740640) Stream removed, broadcasting: 1\nI0615 11:16:01.280749 710 log.go:172] (0xc000138840) Go away received\nI0615 11:16:01.280974 710 log.go:172] (0xc000138840) (0xc000740640) Stream removed, broadcasting: 1\nI0615 11:16:01.281003 710 log.go:172] (0xc000138840) (0xc0005f4d20) Stream removed, broadcasting: 3\nI0615 11:16:01.281011 710 log.go:172] (0xc000138840) (0xc0006ba000) Stream removed, broadcasting: 5\n" Jun 15 11:16:01.285: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 15 11:16:01.285: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 15 11:16:01.339: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Jun 15 11:16:11.345: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 15 11:16:11.346: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 15 11:16:11.346: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jun 15 11:16:11.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4d9fn ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 15 11:16:11.545: INFO: stderr: "I0615 11:16:11.488012 733 log.go:172] (0xc0008582c0) (0xc000722640) Create stream\nI0615 11:16:11.488077 733 log.go:172] (0xc0008582c0) (0xc000722640) Stream added, broadcasting: 1\nI0615 11:16:11.490208 733 log.go:172] (0xc0008582c0) Reply frame received for 1\nI0615 11:16:11.490236 733 log.go:172] (0xc0008582c0) (0xc0007226e0) Create stream\nI0615 11:16:11.490244 733 log.go:172] (0xc0008582c0) (0xc0007226e0) Stream added, broadcasting: 3\nI0615 11:16:11.490880 733 log.go:172] (0xc0008582c0) Reply frame received for 3\nI0615 11:16:11.490912 733 log.go:172] (0xc0008582c0) (0xc0005dabe0) Create stream\nI0615 11:16:11.490923 733 log.go:172] (0xc0008582c0) (0xc0005dabe0) Stream added, broadcasting: 5\nI0615 11:16:11.491637 733 log.go:172] (0xc0008582c0) Reply frame received for 5\nI0615 11:16:11.539604 733 log.go:172] (0xc0008582c0) Data frame received for 5\nI0615 11:16:11.539653 733 log.go:172] (0xc0005dabe0) (5) Data frame handling\nI0615 11:16:11.539684 733 log.go:172] (0xc0008582c0) Data frame received for 3\nI0615 11:16:11.539697 733 log.go:172] (0xc0007226e0) (3) Data frame handling\nI0615 11:16:11.539709 733 log.go:172] (0xc0007226e0) (3) Data frame sent\nI0615 11:16:11.539723 733 log.go:172] (0xc0008582c0) Data frame received for 3\nI0615 11:16:11.539735 733 log.go:172] (0xc0007226e0) (3) Data frame handling\nI0615 11:16:11.540975 733 log.go:172] (0xc0008582c0) Data frame received for 1\nI0615 11:16:11.540995 733 log.go:172] (0xc000722640) (1) Data frame handling\nI0615 11:16:11.541004 733 log.go:172] (0xc000722640) (1) Data frame sent\nI0615 11:16:11.541014 733 log.go:172] (0xc0008582c0) (0xc000722640) Stream removed, broadcasting: 1\nI0615 11:16:11.541036 733 log.go:172] (0xc0008582c0) Go away received\nI0615 11:16:11.541304 733 log.go:172] (0xc0008582c0) (0xc000722640) Stream removed, broadcasting: 1\nI0615 11:16:11.541317 733 log.go:172] (0xc0008582c0) 
(0xc0007226e0) Stream removed, broadcasting: 3\nI0615 11:16:11.541325 733 log.go:172] (0xc0008582c0) (0xc0005dabe0) Stream removed, broadcasting: 5\n" Jun 15 11:16:11.545: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 15 11:16:11.546: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 15 11:16:11.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4d9fn ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 15 11:16:11.829: INFO: stderr: "I0615 11:16:11.718087 756 log.go:172] (0xc000692210) (0xc000127400) Create stream\nI0615 11:16:11.718150 756 log.go:172] (0xc000692210) (0xc000127400) Stream added, broadcasting: 1\nI0615 11:16:11.719762 756 log.go:172] (0xc000692210) Reply frame received for 1\nI0615 11:16:11.719796 756 log.go:172] (0xc000692210) (0xc0003f0000) Create stream\nI0615 11:16:11.719805 756 log.go:172] (0xc000692210) (0xc0003f0000) Stream added, broadcasting: 3\nI0615 11:16:11.720344 756 log.go:172] (0xc000692210) Reply frame received for 3\nI0615 11:16:11.720391 756 log.go:172] (0xc000692210) (0xc0006be000) Create stream\nI0615 11:16:11.720401 756 log.go:172] (0xc000692210) (0xc0006be000) Stream added, broadcasting: 5\nI0615 11:16:11.720875 756 log.go:172] (0xc000692210) Reply frame received for 5\nI0615 11:16:11.820721 756 log.go:172] (0xc000692210) Data frame received for 3\nI0615 11:16:11.820755 756 log.go:172] (0xc0003f0000) (3) Data frame handling\nI0615 11:16:11.820769 756 log.go:172] (0xc0003f0000) (3) Data frame sent\nI0615 11:16:11.820996 756 log.go:172] (0xc000692210) Data frame received for 5\nI0615 11:16:11.821029 756 log.go:172] (0xc0006be000) (5) Data frame handling\nI0615 11:16:11.821215 756 log.go:172] (0xc000692210) Data frame received for 3\nI0615 11:16:11.821236 756 log.go:172] (0xc0003f0000) (3) Data frame handling\nI0615 11:16:11.823038 756 log.go:172] (0xc000692210) Data frame received for 1\nI0615 11:16:11.823057 756 log.go:172] (0xc000127400) (1) Data frame handling\nI0615 11:16:11.823072 756 log.go:172] (0xc000127400) (1) Data frame sent\nI0615 11:16:11.823251 756 log.go:172] (0xc000692210) (0xc000127400) Stream removed, broadcasting: 1\nI0615 11:16:11.823300 756 log.go:172] (0xc000692210) Go away received\nI0615 11:16:11.823568 756 log.go:172] (0xc000692210) (0xc000127400) Stream removed, broadcasting: 1\nI0615 11:16:11.823613 756 log.go:172] (0xc000692210) (0xc0003f0000) Stream removed, broadcasting: 3\nI0615 11:16:11.823638 756 log.go:172] (0xc000692210) (0xc0006be000) Stream removed, broadcasting: 5\n" Jun 15 11:16:11.829: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 15 11:16:11.829: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 15 11:16:11.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4d9fn ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 15 11:16:12.037: INFO: stderr: "I0615 11:16:11.937741 778 log.go:172] (0xc0007f82c0) (0xc00070e640) Create stream\nI0615 11:16:11.937787 778 log.go:172] (0xc0007f82c0) (0xc00070e640) Stream added, broadcasting: 1\nI0615 11:16:11.940239 778 log.go:172] (0xc0007f82c0) Reply frame received for 1\nI0615 11:16:11.940311 778 log.go:172] (0xc0007f82c0) (0xc0005c2c80) Create stream\nI0615 11:16:11.940342 
778 log.go:172] (0xc0007f82c0) (0xc0005c2c80) Stream added, broadcasting: 3\nI0615 11:16:11.941596 778 log.go:172] (0xc0007f82c0) Reply frame received for 3\nI0615 11:16:11.941632 778 log.go:172] (0xc0007f82c0) (0xc00070e6e0) Create stream\nI0615 11:16:11.941644 778 log.go:172] (0xc0007f82c0) (0xc00070e6e0) Stream added, broadcasting: 5\nI0615 11:16:11.942534 778 log.go:172] (0xc0007f82c0) Reply frame received for 5\nI0615 11:16:12.030754 778 log.go:172] (0xc0007f82c0) Data frame received for 3\nI0615 11:16:12.030793 778 log.go:172] (0xc0005c2c80) (3) Data frame handling\nI0615 11:16:12.030956 778 log.go:172] (0xc0005c2c80) (3) Data frame sent\nI0615 11:16:12.030985 778 log.go:172] (0xc0007f82c0) Data frame received for 3\nI0615 11:16:12.031001 778 log.go:172] (0xc0005c2c80) (3) Data frame handling\nI0615 11:16:12.031383 778 log.go:172] (0xc0007f82c0) Data frame received for 5\nI0615 11:16:12.031410 778 log.go:172] (0xc00070e6e0) (5) Data frame handling\nI0615 11:16:12.033648 778 log.go:172] (0xc0007f82c0) Data frame received for 1\nI0615 11:16:12.033666 778 log.go:172] (0xc00070e640) (1) Data frame handling\nI0615 11:16:12.033682 778 log.go:172] (0xc00070e640) (1) Data frame sent\nI0615 11:16:12.033772 778 log.go:172] (0xc0007f82c0) (0xc00070e640) Stream removed, broadcasting: 1\nI0615 11:16:12.033825 778 log.go:172] (0xc0007f82c0) Go away received\nI0615 11:16:12.033940 778 log.go:172] (0xc0007f82c0) (0xc00070e640) Stream removed, broadcasting: 1\nI0615 11:16:12.033960 778 log.go:172] (0xc0007f82c0) (0xc0005c2c80) Stream removed, broadcasting: 3\nI0615 11:16:12.033974 778 log.go:172] (0xc0007f82c0) (0xc00070e6e0) Stream removed, broadcasting: 5\n" Jun 15 11:16:12.037: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 15 11:16:12.037: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 15 11:16:12.037: INFO: Waiting for statefulset status.replicas updated to 0 Jun 15 11:16:12.041: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jun 15 11:16:22.068: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 15 11:16:22.068: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 15 11:16:22.068: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 15 11:16:22.299: INFO: POD NODE PHASE GRACE CONDITIONS Jun 15 11:16:22.299: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC }] Jun 15 11:16:22.299: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC }] Jun 15 11:16:22.299: INFO: ss-2 hunter-worker Running 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC }] Jun 15 11:16:22.299: INFO: Jun 15 11:16:22.299: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 15 11:16:23.412: INFO: POD NODE PHASE GRACE CONDITIONS Jun 15 11:16:23.412: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC }] Jun 15 11:16:23.412: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC }] Jun 15 11:16:23.412: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC }] Jun 15 11:16:23.412: INFO: Jun 15 11:16:23.412: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 15 11:16:24.484: INFO: POD NODE PHASE GRACE CONDITIONS Jun 15 11:16:24.484: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC }] Jun 15 11:16:24.484: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC }] Jun 15 11:16:24.484: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 
00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC }] Jun 15 11:16:24.484: INFO: Jun 15 11:16:24.484: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 15 11:16:25.488: INFO: POD NODE PHASE GRACE CONDITIONS Jun 15 11:16:25.488: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC }] Jun 15 11:16:25.488: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC }] Jun 15 11:16:25.488: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC }] Jun 15 11:16:25.488: INFO: Jun 15 11:16:25.488: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 15 11:16:26.492: INFO: POD NODE PHASE GRACE CONDITIONS Jun 15 11:16:26.492: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC }] Jun 15 11:16:26.492: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC }] Jun 15 11:16:26.492: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC }] Jun 15 11:16:26.493: INFO: Jun 15 11:16:26.493: INFO: 
StatefulSet ss has not reached scale 0, at 3 Jun 15 11:16:27.498: INFO: POD NODE PHASE GRACE CONDITIONS Jun 15 11:16:27.498: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC }] Jun 15 11:16:27.498: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC }] Jun 15 11:16:27.498: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC }] Jun 15 11:16:27.498: INFO: Jun 15 11:16:27.498: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 15 11:16:28.503: INFO: POD NODE PHASE GRACE CONDITIONS Jun 15 11:16:28.504: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC }] Jun 15 11:16:28.504: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC }] Jun 15 11:16:28.504: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC }] Jun 15 11:16:28.504: INFO: Jun 15 11:16:28.504: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 15 11:16:29.509: INFO: POD NODE PHASE GRACE CONDITIONS Jun 15 11:16:29.509: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC }] Jun 15 11:16:29.509: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC }] Jun 15 11:16:29.509: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC }] Jun 15 11:16:29.509: INFO: Jun 15 11:16:29.509: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 15 11:16:30.514: INFO: POD NODE PHASE GRACE CONDITIONS Jun 15 11:16:30.514: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:30 +0000 UTC }] Jun 15 11:16:30.514: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC }] Jun 15 11:16:30.514: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC }] Jun 15 11:16:30.514: INFO: Jun 15 11:16:30.514: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 15 11:16:31.640: INFO: POD NODE PHASE GRACE CONDITIONS Jun 15 11:16:31.640: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:16:12 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:15:50 +0000 UTC }] Jun 15 11:16:31.640: INFO: Jun 15 11:16:31.640: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-4d9fn Jun 15 11:16:32.644: INFO: Scaling statefulset ss to 0 Jun 15 11:16:32.651: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 15 11:16:32.654: INFO: Deleting all statefulset in ns e2e-tests-statefulset-4d9fn Jun 15 11:16:32.656: INFO: Scaling statefulset ss to 0 Jun 15 11:16:32.664: INFO: Waiting for statefulset status.replicas updated to 0 Jun 15 11:16:32.666: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:16:32.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-4d9fn" for this suite. Jun 15 11:16:38.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:16:38.795: INFO: namespace: e2e-tests-statefulset-4d9fn, resource: bindings, ignored listing per whitelist Jun 15 11:16:38.850: INFO: namespace e2e-tests-statefulset-4d9fn deletion completed in 6.0886454s • [SLOW TEST:68.862 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:16:38.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-aebb01d4-aef9-11ea-99db-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 15 11:16:39.003: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-aebd4896-aef9-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-frxbr" to be "success or failure" Jun 15 11:16:39.021: INFO: Pod "pod-projected-configmaps-aebd4896-aef9-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.132728ms Jun 15 11:16:41.025: INFO: Pod "pod-projected-configmaps-aebd4896-aef9-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.022036466s Jun 15 11:16:43.028: INFO: Pod "pod-projected-configmaps-aebd4896-aef9-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024903217s STEP: Saw pod success Jun 15 11:16:43.028: INFO: Pod "pod-projected-configmaps-aebd4896-aef9-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:16:43.030: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-aebd4896-aef9-11ea-99db-0242ac11001b container projected-configmap-volume-test: STEP: delete the pod Jun 15 11:16:43.071: INFO: Waiting for pod pod-projected-configmaps-aebd4896-aef9-11ea-99db-0242ac11001b to disappear Jun 15 11:16:43.078: INFO: Pod pod-projected-configmaps-aebd4896-aef9-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:16:43.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-frxbr" for this suite. Jun 15 11:16:49.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:16:49.103: INFO: namespace: e2e-tests-projected-frxbr, resource: bindings, ignored listing per whitelist Jun 15 11:16:49.167: INFO: namespace e2e-tests-projected-frxbr deletion completed in 6.086898597s • [SLOW TEST:10.317 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:16:49.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-rgbnn/configmap-test-b4fd4055-aef9-11ea-99db-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 15 11:16:49.513: INFO: Waiting up to 5m0s for pod "pod-configmaps-b4fdf10f-aef9-11ea-99db-0242ac11001b" in namespace "e2e-tests-configmap-rgbnn" to be "success or failure" Jun 15 11:16:49.555: INFO: Pod "pod-configmaps-b4fdf10f-aef9-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 41.753491ms Jun 15 11:16:51.559: INFO: Pod "pod-configmaps-b4fdf10f-aef9-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046042555s Jun 15 11:16:53.564: INFO: Pod "pod-configmaps-b4fdf10f-aef9-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050631037s Jun 15 11:16:55.850: INFO: Pod "pod-configmaps-b4fdf10f-aef9-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.336819796s STEP: Saw pod success Jun 15 11:16:55.850: INFO: Pod "pod-configmaps-b4fdf10f-aef9-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:16:55.852: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-b4fdf10f-aef9-11ea-99db-0242ac11001b container env-test: STEP: delete the pod Jun 15 11:16:55.925: INFO: Waiting for pod pod-configmaps-b4fdf10f-aef9-11ea-99db-0242ac11001b to disappear Jun 15 11:16:55.941: INFO: Pod pod-configmaps-b4fdf10f-aef9-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:16:55.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rgbnn" for this suite. Jun 15 11:17:02.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:17:02.037: INFO: namespace: e2e-tests-configmap-rgbnn, resource: bindings, ignored listing per whitelist Jun 15 11:17:02.091: INFO: namespace e2e-tests-configmap-rgbnn deletion completed in 6.147010478s • [SLOW TEST:12.924 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:17:02.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 15 11:17:02.194: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc8eebac-aef9-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-g8zfv" to be "success or failure" Jun 15 11:17:02.204: INFO: Pod "downwardapi-volume-bc8eebac-aef9-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.81608ms Jun 15 11:17:04.334: INFO: Pod "downwardapi-volume-bc8eebac-aef9-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139689943s Jun 15 11:17:06.338: INFO: Pod "downwardapi-volume-bc8eebac-aef9-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.143837123s STEP: Saw pod success Jun 15 11:17:06.338: INFO: Pod "downwardapi-volume-bc8eebac-aef9-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:17:06.340: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-bc8eebac-aef9-11ea-99db-0242ac11001b container client-container: STEP: delete the pod Jun 15 11:17:06.367: INFO: Waiting for pod downwardapi-volume-bc8eebac-aef9-11ea-99db-0242ac11001b to disappear Jun 15 11:17:06.418: INFO: Pod downwardapi-volume-bc8eebac-aef9-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:17:06.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-g8zfv" for this suite. Jun 15 11:17:12.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:17:12.478: INFO: namespace: e2e-tests-projected-g8zfv, resource: bindings, ignored listing per whitelist Jun 15 11:17:12.541: INFO: namespace e2e-tests-projected-g8zfv deletion completed in 6.118505012s • [SLOW TEST:10.450 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:17:12.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 15 11:17:12.713: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Jun 15 11:17:12.722: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-8ckpw/daemonsets","resourceVersion":"16068484"},"items":null} Jun 15 11:17:12.724: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-8ckpw/pods","resourceVersion":"16068484"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:17:12.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-8ckpw" for this suite. 
Jun 15 11:17:18.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:17:18.829: INFO: namespace: e2e-tests-daemonsets-8ckpw, resource: bindings, ignored listing per whitelist Jun 15 11:17:18.837: INFO: namespace e2e-tests-daemonsets-8ckpw deletion completed in 6.099841712s S [SKIPPING] [6.296 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 15 11:17:12.713: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:17:18.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:17:25.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-pdbvj" for this suite. Jun 15 11:17:31.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:17:31.206: INFO: namespace: e2e-tests-namespaces-pdbvj, resource: bindings, ignored listing per whitelist Jun 15 11:17:31.234: INFO: namespace e2e-tests-namespaces-pdbvj deletion completed in 6.096041979s STEP: Destroying namespace "e2e-tests-nsdeletetest-5gxqh" for this suite. Jun 15 11:17:31.236: INFO: Namespace e2e-tests-nsdeletetest-5gxqh was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-qmqvm" for this suite. 
Jun 15 11:17:37.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:17:37.314: INFO: namespace: e2e-tests-nsdeletetest-qmqvm, resource: bindings, ignored listing per whitelist Jun 15 11:17:37.331: INFO: namespace e2e-tests-nsdeletetest-qmqvm deletion completed in 6.094782597s • [SLOW TEST:18.493 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:17:37.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jun 15 11:17:37.419: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:17:42.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-xmv5l" for this suite. 
Jun 15 11:17:48.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:17:48.935: INFO: namespace: e2e-tests-init-container-xmv5l, resource: bindings, ignored listing per whitelist Jun 15 11:17:48.945: INFO: namespace e2e-tests-init-container-xmv5l deletion completed in 6.099773669s • [SLOW TEST:11.614 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:17:48.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-d883573b-aef9-11ea-99db-0242ac11001b STEP: Creating a pod to test consume secrets Jun 15 11:17:49.100: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d8855598-aef9-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-8jxq6" to be "success or failure" Jun 15 11:17:49.114: INFO: Pod "pod-projected-secrets-d8855598-aef9-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.184594ms Jun 15 11:17:51.118: INFO: Pod "pod-projected-secrets-d8855598-aef9-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017420323s Jun 15 11:17:53.121: INFO: Pod "pod-projected-secrets-d8855598-aef9-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021021592s STEP: Saw pod success Jun 15 11:17:53.121: INFO: Pod "pod-projected-secrets-d8855598-aef9-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:17:53.123: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-d8855598-aef9-11ea-99db-0242ac11001b container secret-volume-test: STEP: delete the pod Jun 15 11:17:53.216: INFO: Waiting for pod pod-projected-secrets-d8855598-aef9-11ea-99db-0242ac11001b to disappear Jun 15 11:17:53.258: INFO: Pod pod-projected-secrets-d8855598-aef9-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:17:53.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8jxq6" for this suite. 
Jun 15 11:17:59.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:17:59.338: INFO: namespace: e2e-tests-projected-8jxq6, resource: bindings, ignored listing per whitelist Jun 15 11:17:59.351: INFO: namespace e2e-tests-projected-8jxq6 deletion completed in 6.0869703s • [SLOW TEST:10.406 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:17:59.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:18:33.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-cbd7s" for this suite. 
Jun 15 11:18:39.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:18:39.375: INFO: namespace: e2e-tests-container-runtime-cbd7s, resource: bindings, ignored listing per whitelist Jun 15 11:18:39.427: INFO: namespace e2e-tests-container-runtime-cbd7s deletion completed in 6.095894641s • [SLOW TEST:40.075 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:18:39.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jun 15 11:18:39.553: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-blwl8,SelfLink:/api/v1/namespaces/e2e-tests-watch-blwl8/configmaps/e2e-watch-test-watch-closed,UID:f6945043-aef9-11ea-99e8-0242ac110002,ResourceVersion:16068822,Generation:0,CreationTimestamp:2020-06-15 11:18:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 15 11:18:39.554: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-blwl8,SelfLink:/api/v1/namespaces/e2e-tests-watch-blwl8/configmaps/e2e-watch-test-watch-closed,UID:f6945043-aef9-11ea-99e8-0242ac110002,ResourceVersion:16068823,Generation:0,CreationTimestamp:2020-06-15 11:18:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to 
observe notifications for all changes to the configmap since the first watch closed Jun 15 11:18:39.630: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-blwl8,SelfLink:/api/v1/namespaces/e2e-tests-watch-blwl8/configmaps/e2e-watch-test-watch-closed,UID:f6945043-aef9-11ea-99e8-0242ac110002,ResourceVersion:16068824,Generation:0,CreationTimestamp:2020-06-15 11:18:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 15 11:18:39.630: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-blwl8,SelfLink:/api/v1/namespaces/e2e-tests-watch-blwl8/configmaps/e2e-watch-test-watch-closed,UID:f6945043-aef9-11ea-99e8-0242ac110002,ResourceVersion:16068825,Generation:0,CreationTimestamp:2020-06-15 11:18:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:18:39.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-blwl8" for this suite. Jun 15 11:18:45.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:18:45.676: INFO: namespace: e2e-tests-watch-blwl8, resource: bindings, ignored listing per whitelist Jun 15 11:18:45.727: INFO: namespace e2e-tests-watch-blwl8 deletion completed in 6.093681833s • [SLOW TEST:6.300 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:18:45.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jun 15 11:18:57.874: INFO: ExecWithOptions 
{Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xslvz PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 15 11:18:57.874: INFO: >>> kubeConfig: /root/.kube/config I0615 11:18:57.898709 6 log.go:172] (0xc000bd0000) (0xc001f11ea0) Create stream I0615 11:18:57.898739 6 log.go:172] (0xc000bd0000) (0xc001f11ea0) Stream added, broadcasting: 1 I0615 11:18:57.900388 6 log.go:172] (0xc000bd0000) Reply frame received for 1 I0615 11:18:57.900409 6 log.go:172] (0xc000bd0000) (0xc001f11f40) Create stream I0615 11:18:57.900414 6 log.go:172] (0xc000bd0000) (0xc001f11f40) Stream added, broadcasting: 3 I0615 11:18:57.901720 6 log.go:172] (0xc000bd0000) Reply frame received for 3 I0615 11:18:57.901769 6 log.go:172] (0xc000bd0000) (0xc001d1fa40) Create stream I0615 11:18:57.901785 6 log.go:172] (0xc000bd0000) (0xc001d1fa40) Stream added, broadcasting: 5 I0615 11:18:57.905838 6 log.go:172] (0xc000bd0000) Reply frame received for 5 I0615 11:18:57.980735 6 log.go:172] (0xc000bd0000) Data frame received for 5 I0615 11:18:57.980773 6 log.go:172] (0xc001d1fa40) (5) Data frame handling I0615 11:18:57.980795 6 log.go:172] (0xc000bd0000) Data frame received for 3 I0615 11:18:57.980806 6 log.go:172] (0xc001f11f40) (3) Data frame handling I0615 11:18:57.980818 6 log.go:172] (0xc001f11f40) (3) Data frame sent I0615 11:18:57.980829 6 log.go:172] (0xc000bd0000) Data frame received for 3 I0615 11:18:57.980839 6 log.go:172] (0xc001f11f40) (3) Data frame handling I0615 11:18:57.982357 6 log.go:172] (0xc000bd0000) Data frame received for 1 I0615 11:18:57.982439 6 log.go:172] (0xc001f11ea0) (1) Data frame handling I0615 11:18:57.982468 6 log.go:172] (0xc001f11ea0) (1) Data frame sent I0615 11:18:57.982485 6 log.go:172] (0xc000bd0000) (0xc001f11ea0) Stream removed, broadcasting: 1 I0615 11:18:57.982502 6 log.go:172] (0xc000bd0000) Go away received I0615 11:18:57.982640 6 log.go:172] (0xc000bd0000) (0xc001f11ea0) Stream removed, broadcasting: 1 I0615 11:18:57.982661 6 log.go:172] (0xc000bd0000) (0xc001f11f40) Stream removed, broadcasting: 3 I0615 11:18:57.982671 6 log.go:172] (0xc000bd0000) (0xc001d1fa40) Stream removed, broadcasting: 5 Jun 15 11:18:57.982: INFO: Exec stderr: "" Jun 15 11:18:57.982: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xslvz PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 15 11:18:57.982: INFO: >>> kubeConfig: /root/.kube/config I0615 11:18:58.012243 6 log.go:172] (0xc0015b02c0) (0xc000c4bc20) Create stream I0615 11:18:58.012286 6 log.go:172] (0xc0015b02c0) (0xc000c4bc20) Stream added, broadcasting: 1 I0615 11:18:58.014445 6 log.go:172] (0xc0015b02c0) Reply frame received for 1 I0615 11:18:58.014482 6 log.go:172] (0xc0015b02c0) (0xc001d1fae0) Create stream I0615 11:18:58.014495 6 log.go:172] (0xc0015b02c0) (0xc001d1fae0) Stream added, broadcasting: 3 I0615 11:18:58.015514 6 log.go:172] (0xc0015b02c0) Reply frame received for 3 I0615 11:18:58.015554 6 log.go:172] (0xc0015b02c0) (0xc000c4bd60) Create stream I0615 11:18:58.015570 6 log.go:172] (0xc0015b02c0) (0xc000c4bd60) Stream added, broadcasting: 5 I0615 11:18:58.016615 6 log.go:172] (0xc0015b02c0) Reply frame received for 5 I0615 11:18:58.080601 6 log.go:172] (0xc0015b02c0) Data frame received for 3 I0615 11:18:58.080658 6 log.go:172] (0xc001d1fae0) (3) Data frame handling I0615 11:18:58.080703 6 log.go:172] (0xc001d1fae0) (3) Data frame sent I0615 
11:18:58.080724 6 log.go:172] (0xc0015b02c0) Data frame received for 3 I0615 11:18:58.080743 6 log.go:172] (0xc001d1fae0) (3) Data frame handling I0615 11:18:58.080999 6 log.go:172] (0xc0015b02c0) Data frame received for 5 I0615 11:18:58.081021 6 log.go:172] (0xc000c4bd60) (5) Data frame handling I0615 11:18:58.082360 6 log.go:172] (0xc0015b02c0) Data frame received for 1 I0615 11:18:58.082394 6 log.go:172] (0xc000c4bc20) (1) Data frame handling I0615 11:18:58.082416 6 log.go:172] (0xc000c4bc20) (1) Data frame sent I0615 11:18:58.082489 6 log.go:172] (0xc0015b02c0) (0xc000c4bc20) Stream removed, broadcasting: 1 I0615 11:18:58.082530 6 log.go:172] (0xc0015b02c0) Go away received I0615 11:18:58.082743 6 log.go:172] (0xc0015b02c0) (0xc000c4bc20) Stream removed, broadcasting: 1 I0615 11:18:58.082786 6 log.go:172] (0xc0015b02c0) (0xc001d1fae0) Stream removed, broadcasting: 3 I0615 11:18:58.082860 6 log.go:172] (0xc0015b02c0) (0xc000c4bd60) Stream removed, broadcasting: 5 Jun 15 11:18:58.082: INFO: Exec stderr: "" Jun 15 11:18:58.082: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xslvz PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 15 11:18:58.082: INFO: >>> kubeConfig: /root/.kube/config I0615 11:18:58.119110 6 log.go:172] (0xc0022a24d0) (0xc001d1fd60) Create stream I0615 11:18:58.119134 6 log.go:172] (0xc0022a24d0) (0xc001d1fd60) Stream added, broadcasting: 1 I0615 11:18:58.128497 6 log.go:172] (0xc0022a24d0) Reply frame received for 1 I0615 11:18:58.128566 6 log.go:172] (0xc0022a24d0) (0xc001d23680) Create stream I0615 11:18:58.128593 6 log.go:172] (0xc0022a24d0) (0xc001d23680) Stream added, broadcasting: 3 I0615 11:18:58.130610 6 log.go:172] (0xc0022a24d0) Reply frame received for 3 I0615 11:18:58.130673 6 log.go:172] (0xc0022a24d0) (0xc000ce00a0) Create stream I0615 11:18:58.130708 6 log.go:172] (0xc0022a24d0) (0xc000ce00a0) Stream added, broadcasting: 5 I0615 11:18:58.132927 6 log.go:172] (0xc0022a24d0) Reply frame received for 5 I0615 11:18:58.202735 6 log.go:172] (0xc0022a24d0) Data frame received for 5 I0615 11:18:58.202773 6 log.go:172] (0xc000ce00a0) (5) Data frame handling I0615 11:18:58.202827 6 log.go:172] (0xc0022a24d0) Data frame received for 3 I0615 11:18:58.202885 6 log.go:172] (0xc001d23680) (3) Data frame handling I0615 11:18:58.202916 6 log.go:172] (0xc001d23680) (3) Data frame sent I0615 11:18:58.202940 6 log.go:172] (0xc0022a24d0) Data frame received for 3 I0615 11:18:58.202957 6 log.go:172] (0xc001d23680) (3) Data frame handling I0615 11:18:58.204505 6 log.go:172] (0xc0022a24d0) Data frame received for 1 I0615 11:18:58.204520 6 log.go:172] (0xc001d1fd60) (1) Data frame handling I0615 11:18:58.204527 6 log.go:172] (0xc001d1fd60) (1) Data frame sent I0615 11:18:58.204535 6 log.go:172] (0xc0022a24d0) (0xc001d1fd60) Stream removed, broadcasting: 1 I0615 11:18:58.204559 6 log.go:172] (0xc0022a24d0) Go away received I0615 11:18:58.204668 6 log.go:172] (0xc0022a24d0) (0xc001d1fd60) Stream removed, broadcasting: 1 I0615 11:18:58.204748 6 log.go:172] (0xc0022a24d0) (0xc001d23680) Stream removed, broadcasting: 3 I0615 11:18:58.204762 6 log.go:172] (0xc0022a24d0) (0xc000ce00a0) Stream removed, broadcasting: 5 Jun 15 11:18:58.204: INFO: Exec stderr: "" Jun 15 11:18:58.204: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xslvz PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Jun 15 11:18:58.204: INFO: >>> kubeConfig: /root/.kube/config I0615 11:18:58.242541 6 log.go:172] (0xc0015b0790) (0xc001922000) Create stream I0615 11:18:58.242585 6 log.go:172] (0xc0015b0790) (0xc001922000) Stream added, broadcasting: 1 I0615 11:18:58.244245 6 log.go:172] (0xc0015b0790) Reply frame received for 1 I0615 11:18:58.244311 6 log.go:172] (0xc0015b0790) (0xc001d23720) Create stream I0615 11:18:58.244337 6 log.go:172] (0xc0015b0790) (0xc001d23720) Stream added, broadcasting: 3 I0615 11:18:58.245820 6 log.go:172] (0xc0015b0790) Reply frame received for 3 I0615 11:18:58.245862 6 log.go:172] (0xc0015b0790) (0xc000ce0140) Create stream I0615 11:18:58.245878 6 log.go:172] (0xc0015b0790) (0xc000ce0140) Stream added, broadcasting: 5 I0615 11:18:58.246888 6 log.go:172] (0xc0015b0790) Reply frame received for 5 I0615 11:18:58.312733 6 log.go:172] (0xc0015b0790) Data frame received for 5 I0615 11:18:58.312773 6 log.go:172] (0xc000ce0140) (5) Data frame handling I0615 11:18:58.312805 6 log.go:172] (0xc0015b0790) Data frame received for 3 I0615 11:18:58.312822 6 log.go:172] (0xc001d23720) (3) Data frame handling I0615 11:18:58.312837 6 log.go:172] (0xc001d23720) (3) Data frame sent I0615 11:18:58.312854 6 log.go:172] (0xc0015b0790) Data frame received for 3 I0615 11:18:58.312871 6 log.go:172] (0xc001d23720) (3) Data frame handling I0615 11:18:58.314539 6 log.go:172] (0xc0015b0790) Data frame received for 1 I0615 11:18:58.314559 6 log.go:172] (0xc001922000) (1) Data frame handling I0615 11:18:58.314568 6 log.go:172] (0xc001922000) (1) Data frame sent I0615 11:18:58.314581 6 log.go:172] (0xc0015b0790) (0xc001922000) Stream removed, broadcasting: 1 I0615 11:18:58.314630 6 log.go:172] (0xc0015b0790) Go away received I0615 11:18:58.314698 6 log.go:172] (0xc0015b0790) (0xc001922000) Stream removed, broadcasting: 1 I0615 11:18:58.314726 6 log.go:172] (0xc0015b0790) (0xc001d23720) Stream removed, broadcasting: 3 I0615 11:18:58.314746 6 log.go:172] (0xc0015b0790) (0xc000ce0140) Stream removed, broadcasting: 5 Jun 15 11:18:58.314: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jun 15 11:18:58.314: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xslvz PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 15 11:18:58.314: INFO: >>> kubeConfig: /root/.kube/config I0615 11:18:58.562723 6 log.go:172] (0xc001b2e2c0) (0xc001d239a0) Create stream I0615 11:18:58.562775 6 log.go:172] (0xc001b2e2c0) (0xc001d239a0) Stream added, broadcasting: 1 I0615 11:18:58.565786 6 log.go:172] (0xc001b2e2c0) Reply frame received for 1 I0615 11:18:58.565823 6 log.go:172] (0xc001b2e2c0) (0xc000ce01e0) Create stream I0615 11:18:58.565835 6 log.go:172] (0xc001b2e2c0) (0xc000ce01e0) Stream added, broadcasting: 3 I0615 11:18:58.566771 6 log.go:172] (0xc001b2e2c0) Reply frame received for 3 I0615 11:18:58.566794 6 log.go:172] (0xc001b2e2c0) (0xc001922140) Create stream I0615 11:18:58.566807 6 log.go:172] (0xc001b2e2c0) (0xc001922140) Stream added, broadcasting: 5 I0615 11:18:58.567529 6 log.go:172] (0xc001b2e2c0) Reply frame received for 5 I0615 11:18:58.624585 6 log.go:172] (0xc001b2e2c0) Data frame received for 5 I0615 11:18:58.624640 6 log.go:172] (0xc001922140) (5) Data frame handling I0615 11:18:58.624675 6 log.go:172] (0xc001b2e2c0) Data frame received for 3 I0615 11:18:58.624691 6 log.go:172] (0xc000ce01e0) (3) Data 
frame handling I0615 11:18:58.624710 6 log.go:172] (0xc000ce01e0) (3) Data frame sent I0615 11:18:58.624732 6 log.go:172] (0xc001b2e2c0) Data frame received for 3 I0615 11:18:58.624746 6 log.go:172] (0xc000ce01e0) (3) Data frame handling I0615 11:18:58.625729 6 log.go:172] (0xc001b2e2c0) Data frame received for 1 I0615 11:18:58.625755 6 log.go:172] (0xc001d239a0) (1) Data frame handling I0615 11:18:58.625777 6 log.go:172] (0xc001d239a0) (1) Data frame sent I0615 11:18:58.625801 6 log.go:172] (0xc001b2e2c0) (0xc001d239a0) Stream removed, broadcasting: 1 I0615 11:18:58.625844 6 log.go:172] (0xc001b2e2c0) Go away received I0615 11:18:58.625901 6 log.go:172] (0xc001b2e2c0) (0xc001d239a0) Stream removed, broadcasting: 1 I0615 11:18:58.625917 6 log.go:172] (0xc001b2e2c0) (0xc000ce01e0) Stream removed, broadcasting: 3 I0615 11:18:58.625931 6 log.go:172] (0xc001b2e2c0) (0xc001922140) Stream removed, broadcasting: 5 Jun 15 11:18:58.625: INFO: Exec stderr: "" Jun 15 11:18:58.625: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xslvz PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 15 11:18:58.626: INFO: >>> kubeConfig: /root/.kube/config I0615 11:18:58.656653 6 log.go:172] (0xc0015b0c60) (0xc001922500) Create stream I0615 11:18:58.656678 6 log.go:172] (0xc0015b0c60) (0xc001922500) Stream added, broadcasting: 1 I0615 11:18:58.658782 6 log.go:172] (0xc0015b0c60) Reply frame received for 1 I0615 11:18:58.658809 6 log.go:172] (0xc0015b0c60) (0xc000ce0320) Create stream I0615 11:18:58.658819 6 log.go:172] (0xc0015b0c60) (0xc000ce0320) Stream added, broadcasting: 3 I0615 11:18:58.659660 6 log.go:172] (0xc0015b0c60) Reply frame received for 3 I0615 11:18:58.659694 6 log.go:172] (0xc0015b0c60) (0xc001d1fe00) Create stream I0615 11:18:58.659705 6 log.go:172] (0xc0015b0c60) (0xc001d1fe00) Stream added, broadcasting: 5 I0615 11:18:58.660622 6 log.go:172] (0xc0015b0c60) Reply frame received for 5 I0615 11:18:58.714045 6 log.go:172] (0xc0015b0c60) Data frame received for 3 I0615 11:18:58.714091 6 log.go:172] (0xc000ce0320) (3) Data frame handling I0615 11:18:58.714110 6 log.go:172] (0xc000ce0320) (3) Data frame sent I0615 11:18:58.714140 6 log.go:172] (0xc0015b0c60) Data frame received for 5 I0615 11:18:58.714192 6 log.go:172] (0xc001d1fe00) (5) Data frame handling I0615 11:18:58.714231 6 log.go:172] (0xc0015b0c60) Data frame received for 3 I0615 11:18:58.714260 6 log.go:172] (0xc000ce0320) (3) Data frame handling I0615 11:18:58.715772 6 log.go:172] (0xc0015b0c60) Data frame received for 1 I0615 11:18:58.715805 6 log.go:172] (0xc001922500) (1) Data frame handling I0615 11:18:58.715836 6 log.go:172] (0xc001922500) (1) Data frame sent I0615 11:18:58.715868 6 log.go:172] (0xc0015b0c60) (0xc001922500) Stream removed, broadcasting: 1 I0615 11:18:58.715892 6 log.go:172] (0xc0015b0c60) Go away received I0615 11:18:58.716008 6 log.go:172] (0xc0015b0c60) (0xc001922500) Stream removed, broadcasting: 1 I0615 11:18:58.716033 6 log.go:172] (0xc0015b0c60) (0xc000ce0320) Stream removed, broadcasting: 3 I0615 11:18:58.716045 6 log.go:172] (0xc0015b0c60) (0xc001d1fe00) Stream removed, broadcasting: 5 Jun 15 11:18:58.716: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 15 11:18:58.716: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xslvz PodName:test-host-network-pod ContainerName:busybox-1 
Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 15 11:18:58.716: INFO: >>> kubeConfig: /root/.kube/config I0615 11:18:58.748986 6 log.go:172] (0xc001b2e790) (0xc001d23d60) Create stream I0615 11:18:58.749015 6 log.go:172] (0xc001b2e790) (0xc001d23d60) Stream added, broadcasting: 1 I0615 11:18:58.750867 6 log.go:172] (0xc001b2e790) Reply frame received for 1 I0615 11:18:58.750917 6 log.go:172] (0xc001b2e790) (0xc001d23e00) Create stream I0615 11:18:58.750936 6 log.go:172] (0xc001b2e790) (0xc001d23e00) Stream added, broadcasting: 3 I0615 11:18:58.751990 6 log.go:172] (0xc001b2e790) Reply frame received for 3 I0615 11:18:58.752024 6 log.go:172] (0xc001b2e790) (0xc001d1fea0) Create stream I0615 11:18:58.752036 6 log.go:172] (0xc001b2e790) (0xc001d1fea0) Stream added, broadcasting: 5 I0615 11:18:58.754284 6 log.go:172] (0xc001b2e790) Reply frame received for 5 I0615 11:18:58.812875 6 log.go:172] (0xc001b2e790) Data frame received for 5 I0615 11:18:58.812914 6 log.go:172] (0xc001b2e790) Data frame received for 3 I0615 11:18:58.812943 6 log.go:172] (0xc001d23e00) (3) Data frame handling I0615 11:18:58.812957 6 log.go:172] (0xc001d23e00) (3) Data frame sent I0615 11:18:58.812969 6 log.go:172] (0xc001b2e790) Data frame received for 3 I0615 11:18:58.812979 6 log.go:172] (0xc001d23e00) (3) Data frame handling I0615 11:18:58.813007 6 log.go:172] (0xc001d1fea0) (5) Data frame handling I0615 11:18:58.814718 6 log.go:172] (0xc001b2e790) Data frame received for 1 I0615 11:18:58.814742 6 log.go:172] (0xc001d23d60) (1) Data frame handling I0615 11:18:58.814761 6 log.go:172] (0xc001d23d60) (1) Data frame sent I0615 11:18:58.814784 6 log.go:172] (0xc001b2e790) (0xc001d23d60) Stream removed, broadcasting: 1 I0615 11:18:58.814800 6 log.go:172] (0xc001b2e790) Go away received I0615 11:18:58.814864 6 log.go:172] (0xc001b2e790) (0xc001d23d60) Stream removed, broadcasting: 1 I0615 11:18:58.814880 6 log.go:172] (0xc001b2e790) (0xc001d23e00) Stream removed, broadcasting: 3 I0615 11:18:58.814894 6 log.go:172] (0xc001b2e790) (0xc001d1fea0) Stream removed, broadcasting: 5 Jun 15 11:18:58.814: INFO: Exec stderr: "" Jun 15 11:18:58.814: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xslvz PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 15 11:18:58.814: INFO: >>> kubeConfig: /root/.kube/config I0615 11:18:58.919011 6 log.go:172] (0xc001b2ec60) (0xc000b720a0) Create stream I0615 11:18:58.919050 6 log.go:172] (0xc001b2ec60) (0xc000b720a0) Stream added, broadcasting: 1 I0615 11:18:58.921798 6 log.go:172] (0xc001b2ec60) Reply frame received for 1 I0615 11:18:58.921845 6 log.go:172] (0xc001b2ec60) (0xc000b72320) Create stream I0615 11:18:58.921857 6 log.go:172] (0xc001b2ec60) (0xc000b72320) Stream added, broadcasting: 3 I0615 11:18:58.923136 6 log.go:172] (0xc001b2ec60) Reply frame received for 3 I0615 11:18:58.923184 6 log.go:172] (0xc001b2ec60) (0xc000ce03c0) Create stream I0615 11:18:58.923201 6 log.go:172] (0xc001b2ec60) (0xc000ce03c0) Stream added, broadcasting: 5 I0615 11:18:58.924107 6 log.go:172] (0xc001b2ec60) Reply frame received for 5 I0615 11:18:58.967606 6 log.go:172] (0xc001b2ec60) Data frame received for 5 I0615 11:18:58.967641 6 log.go:172] (0xc000ce03c0) (5) Data frame handling I0615 11:18:58.967665 6 log.go:172] (0xc001b2ec60) Data frame received for 3 I0615 11:18:58.967675 6 log.go:172] (0xc000b72320) (3) Data frame handling I0615 11:18:58.967687 6 
log.go:172] (0xc000b72320) (3) Data frame sent I0615 11:18:58.967695 6 log.go:172] (0xc001b2ec60) Data frame received for 3 I0615 11:18:58.967706 6 log.go:172] (0xc000b72320) (3) Data frame handling I0615 11:18:58.968867 6 log.go:172] (0xc001b2ec60) Data frame received for 1 I0615 11:18:58.968894 6 log.go:172] (0xc000b720a0) (1) Data frame handling I0615 11:18:58.968913 6 log.go:172] (0xc000b720a0) (1) Data frame sent I0615 11:18:58.968927 6 log.go:172] (0xc001b2ec60) (0xc000b720a0) Stream removed, broadcasting: 1 I0615 11:18:58.968939 6 log.go:172] (0xc001b2ec60) Go away received I0615 11:18:58.969034 6 log.go:172] (0xc001b2ec60) (0xc000b720a0) Stream removed, broadcasting: 1 I0615 11:18:58.969055 6 log.go:172] (0xc001b2ec60) (0xc000b72320) Stream removed, broadcasting: 3 I0615 11:18:58.969065 6 log.go:172] (0xc001b2ec60) (0xc000ce03c0) Stream removed, broadcasting: 5 Jun 15 11:18:58.969: INFO: Exec stderr: "" Jun 15 11:18:58.969: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xslvz PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 15 11:18:58.969: INFO: >>> kubeConfig: /root/.kube/config I0615 11:18:58.999406 6 log.go:172] (0xc000bd04d0) (0xc000ce0640) Create stream I0615 11:18:58.999434 6 log.go:172] (0xc000bd04d0) (0xc000ce0640) Stream added, broadcasting: 1 I0615 11:18:59.006748 6 log.go:172] (0xc000bd04d0) Reply frame received for 1 I0615 11:18:59.006787 6 log.go:172] (0xc000bd04d0) (0xc001f10000) Create stream I0615 11:18:59.006799 6 log.go:172] (0xc000bd04d0) (0xc001f10000) Stream added, broadcasting: 3 I0615 11:18:59.007528 6 log.go:172] (0xc000bd04d0) Reply frame received for 3 I0615 11:18:59.007572 6 log.go:172] (0xc000bd04d0) (0xc0002ce1e0) Create stream I0615 11:18:59.007584 6 log.go:172] (0xc000bd04d0) (0xc0002ce1e0) Stream added, broadcasting: 5 I0615 11:18:59.008370 6 log.go:172] (0xc000bd04d0) Reply frame received for 5 I0615 11:18:59.064884 6 log.go:172] (0xc000bd04d0) Data frame received for 5 I0615 11:18:59.064907 6 log.go:172] (0xc0002ce1e0) (5) Data frame handling I0615 11:18:59.064953 6 log.go:172] (0xc000bd04d0) Data frame received for 3 I0615 11:18:59.065001 6 log.go:172] (0xc001f10000) (3) Data frame handling I0615 11:18:59.065025 6 log.go:172] (0xc001f10000) (3) Data frame sent I0615 11:18:59.065382 6 log.go:172] (0xc000bd04d0) Data frame received for 3 I0615 11:18:59.065415 6 log.go:172] (0xc001f10000) (3) Data frame handling I0615 11:18:59.067036 6 log.go:172] (0xc000bd04d0) Data frame received for 1 I0615 11:18:59.067057 6 log.go:172] (0xc000ce0640) (1) Data frame handling I0615 11:18:59.067068 6 log.go:172] (0xc000ce0640) (1) Data frame sent I0615 11:18:59.067080 6 log.go:172] (0xc000bd04d0) (0xc000ce0640) Stream removed, broadcasting: 1 I0615 11:18:59.067098 6 log.go:172] (0xc000bd04d0) Go away received I0615 11:18:59.067278 6 log.go:172] (0xc000bd04d0) (0xc000ce0640) Stream removed, broadcasting: 1 I0615 11:18:59.067316 6 log.go:172] (0xc000bd04d0) (0xc001f10000) Stream removed, broadcasting: 3 I0615 11:18:59.067343 6 log.go:172] (0xc000bd04d0) (0xc0002ce1e0) Stream removed, broadcasting: 5 Jun 15 11:18:59.067: INFO: Exec stderr: "" Jun 15 11:18:59.067: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-xslvz PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 15 11:18:59.067: INFO: >>> kubeConfig: 
/root/.kube/config I0615 11:18:59.090804 6 log.go:172] (0xc000bd0000) (0xc0003594a0) Create stream I0615 11:18:59.090841 6 log.go:172] (0xc000bd0000) (0xc0003594a0) Stream added, broadcasting: 1 I0615 11:18:59.092202 6 log.go:172] (0xc000bd0000) Reply frame received for 1 I0615 11:18:59.092248 6 log.go:172] (0xc000bd0000) (0xc0002ce320) Create stream I0615 11:18:59.092261 6 log.go:172] (0xc000bd0000) (0xc0002ce320) Stream added, broadcasting: 3 I0615 11:18:59.093296 6 log.go:172] (0xc000bd0000) Reply frame received for 3 I0615 11:18:59.093340 6 log.go:172] (0xc000bd0000) (0xc0002ce640) Create stream I0615 11:18:59.093355 6 log.go:172] (0xc000bd0000) (0xc0002ce640) Stream added, broadcasting: 5 I0615 11:18:59.094184 6 log.go:172] (0xc000bd0000) Reply frame received for 5 I0615 11:18:59.151260 6 log.go:172] (0xc000bd0000) Data frame received for 5 I0615 11:18:59.151370 6 log.go:172] (0xc0002ce640) (5) Data frame handling I0615 11:18:59.151402 6 log.go:172] (0xc000bd0000) Data frame received for 3 I0615 11:18:59.151415 6 log.go:172] (0xc0002ce320) (3) Data frame handling I0615 11:18:59.151429 6 log.go:172] (0xc0002ce320) (3) Data frame sent I0615 11:18:59.151442 6 log.go:172] (0xc000bd0000) Data frame received for 3 I0615 11:18:59.151452 6 log.go:172] (0xc0002ce320) (3) Data frame handling I0615 11:18:59.152582 6 log.go:172] (0xc000bd0000) Data frame received for 1 I0615 11:18:59.152626 6 log.go:172] (0xc0003594a0) (1) Data frame handling I0615 11:18:59.152776 6 log.go:172] (0xc0003594a0) (1) Data frame sent I0615 11:18:59.152795 6 log.go:172] (0xc000bd0000) (0xc0003594a0) Stream removed, broadcasting: 1 I0615 11:18:59.152807 6 log.go:172] (0xc000bd0000) Go away received I0615 11:18:59.152949 6 log.go:172] (0xc000bd0000) (0xc0003594a0) Stream removed, broadcasting: 1 I0615 11:18:59.152974 6 log.go:172] (0xc000bd0000) (0xc0002ce320) Stream removed, broadcasting: 3 I0615 11:18:59.152984 6 log.go:172] (0xc000bd0000) (0xc0002ce640) Stream removed, broadcasting: 5 Jun 15 11:18:59.152: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:18:59.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-xslvz" for this suite. 
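The KubeletManagedEtcHosts spec above is comparing /etc/hosts across containers: the kubelet manages the file only for containers that do not mount their own file over it, and not at all for pods on the host network. As a rough illustration of the two pod shapes involved, the following Go sketch pipes hypothetical manifests to kubectl; the pod names, the busybox image, and the assumption that kubectl is on PATH and pointed at a cluster are illustrative, not taken from the test framework.

package main

import (
	"log"
	"os/exec"
	"strings"
)

// One container mounts the node's own /etc/hosts (a hostPath) over its
// /etc/hosts, so the kubelet must leave that container's copy alone; the
// sibling container gets the kubelet-managed version.
const overridePod = `
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo
spec:
  volumes:
  - name: hosts-override
    hostPath:
      path: /etc/hosts
  containers:
  - name: managed
    image: busybox
    command: ["sleep", "3600"]
  - name: unmanaged
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: hosts-override
      mountPath: /etc/hosts
`

// A hostNetwork pod sees the node's own /etc/hosts, which the kubelet
// likewise does not manage.
const hostNetPod = `
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-hostnet-demo
spec:
  hostNetwork: true
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
`

func apply(manifest string) {
	cmd := exec.Command("kubectl", "apply", "-f", "-")
	cmd.Stdin = strings.NewReader(manifest)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
}

func main() {
	apply(overridePod)
	apply(hostNetPod)
	// Comparing `kubectl exec etc-hosts-demo -c managed -- cat /etc/hosts`
	// against the `unmanaged` container shows the difference the spec asserts.
}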
Jun 15 11:20:03.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:20:03.190: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-xslvz, resource: bindings, ignored listing per whitelist Jun 15 11:20:03.243: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-xslvz deletion completed in 1m4.087026147s • [SLOW TEST:77.515 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:20:03.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jun 15 11:20:03.335: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:20:12.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-hfqbj" for this suite. 
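The InitContainer spec that just finished checks that init containers on a RestartAlways pod run to completion, one after another, before the regular containers start. Below is a minimal sketch of such a pod built with the Go API types and printed as a manifest; the names and image are invented, and it assumes k8s.io/api, k8s.io/apimachinery and sigs.k8s.io/yaml are on the module path.

package main

import (
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			// RestartAlways is the default; spelled out because the spec
			// title calls it out explicitly.
			RestartPolicy: corev1.RestartPolicyAlways,
			// Both init containers must exit 0, in order, before "app" starts.
			InitContainers: []corev1.Container{
				{Name: "init-1", Image: "busybox", Command: []string{"sh", "-c", "echo first"}},
				{Name: "init-2", Image: "busybox", Command: []string{"sh", "-c", "echo second"}},
			},
			Containers: []corev1.Container{
				{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
	out, err := yaml.Marshal(&pod)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}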
Jun 15 11:20:34.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:20:34.250: INFO: namespace: e2e-tests-init-container-hfqbj, resource: bindings, ignored listing per whitelist Jun 15 11:20:34.268: INFO: namespace e2e-tests-init-container-hfqbj deletion completed in 22.118405015s • [SLOW TEST:31.025 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:20:34.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-s74z2 Jun 15 11:20:38.391: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-s74z2 STEP: checking the pod's current state and verifying that restartCount is present Jun 15 11:20:38.394: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:24:38.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-s74z2" for this suite. 
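The container-probe spec keeps pod liveness-exec around for roughly four minutes and asserts that its restart count never moves off 0, because the exec probe `cat /tmp/health` keeps succeeding. A hedged sketch of a pod wired that way, applied through kubectl from Go; the pod name, image, timings and the final restart-count check are illustrative.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// The container creates /tmp/health once and never removes it, so the exec
// liveness probe keeps passing and the kubelet never restarts it.
const livenessPod = `
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
`

func main() {
	apply := exec.Command("kubectl", "apply", "-f", "-")
	apply.Stdin = strings.NewReader(livenessPod)
	if out, err := apply.CombinedOutput(); err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
	// After letting the pod run for a while, the restart count should still be 0.
	status := exec.Command("kubectl", "get", "pod", "liveness-demo",
		"-o", "jsonpath={.status.containerStatuses[0].restartCount}")
	out, err := status.CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl get failed: %v\n%s", err, out)
	}
	fmt.Printf("restartCount=%s\n", out)
}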
Jun 15 11:24:45.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:24:45.182: INFO: namespace: e2e-tests-container-probe-s74z2, resource: bindings, ignored listing per whitelist Jun 15 11:24:45.203: INFO: namespace e2e-tests-container-probe-s74z2 deletion completed in 6.105943909s • [SLOW TEST:250.934 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:24:45.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-d09cf852-aefa-11ea-99db-0242ac11001b STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-d09cf852-aefa-11ea-99db-0242ac11001b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:24:51.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bdc6p" for this suite. 
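The projected-ConfigMap spec above depends on the kubelet periodically resyncing projected volumes, so an edit to the ConfigMap eventually shows up in the already-mounted file without restarting the pod. A rough Go sketch of the same create/mount/update flow driven through kubectl; the ConfigMap name, key and values are made up.

package main

import (
	"log"
	"os/exec"
	"strings"
)

const cmPod = `
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-config
  containers:
  - name: reader
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/demo
`

func run(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v failed: %v\n%s", args, err, out)
	}
}

func main() {
	run("create", "configmap", "demo-config", "--from-literal=data=before")
	apply := exec.Command("kubectl", "apply", "-f", "-")
	apply.Stdin = strings.NewReader(cmPod)
	if out, err := apply.CombinedOutput(); err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
	// Update the ConfigMap in place; the kubelet rewrites /etc/demo/data in
	// the running pod on its next sync, which is what the spec waits for.
	run("patch", "configmap", "demo-config", "-p", `{"data":{"data":"after"}}`)
	// `kubectl exec projected-cm-demo -- cat /etc/demo/data` eventually
	// prints "after".
}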
Jun 15 11:25:13.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:25:13.537: INFO: namespace: e2e-tests-projected-bdc6p, resource: bindings, ignored listing per whitelist Jun 15 11:25:13.550: INFO: namespace e2e-tests-projected-bdc6p deletion completed in 22.131992473s • [SLOW TEST:28.347 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:25:13.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 15 11:25:13.866: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e19d08c4-aefa-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-7wcd8" to be "success or failure" Jun 15 11:25:13.888: INFO: Pod "downwardapi-volume-e19d08c4-aefa-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.780878ms Jun 15 11:25:15.892: INFO: Pod "downwardapi-volume-e19d08c4-aefa-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026296169s Jun 15 11:25:17.916: INFO: Pod "downwardapi-volume-e19d08c4-aefa-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050815867s STEP: Saw pod success Jun 15 11:25:17.916: INFO: Pod "downwardapi-volume-e19d08c4-aefa-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:25:17.920: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-e19d08c4-aefa-11ea-99db-0242ac11001b container client-container: STEP: delete the pod Jun 15 11:25:18.044: INFO: Waiting for pod downwardapi-volume-e19d08c4-aefa-11ea-99db-0242ac11001b to disappear Jun 15 11:25:18.058: INFO: Pod downwardapi-volume-e19d08c4-aefa-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:25:18.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7wcd8" for this suite. 
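In the downwardAPI spec the mounted file carries the container's own memory request, which is why the pod both declares resources.requests.memory and points a resourceFieldRef at it. A sketch using the Go API types; the pod name, the 32Mi figure and the module layout are assumptions, though the container name client-container is taken from the log.

package main

import (
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									// The file ends up holding the memory request
									// of the container named below.
									Path: "memory_request",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.memory",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, err := yaml.Marshal(&pod)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}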
Jun 15 11:25:24.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:25:24.119: INFO: namespace: e2e-tests-projected-7wcd8, resource: bindings, ignored listing per whitelist Jun 15 11:25:24.162: INFO: namespace e2e-tests-projected-7wcd8 deletion completed in 6.100662498s • [SLOW TEST:10.612 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:25:24.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-e7e298cf-aefa-11ea-99db-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 15 11:25:24.408: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e7e77aec-aefa-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-h2p7x" to be "success or failure" Jun 15 11:25:24.412: INFO: Pod "pod-projected-configmaps-e7e77aec-aefa-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.65965ms Jun 15 11:25:26.416: INFO: Pod "pod-projected-configmaps-e7e77aec-aefa-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007601712s Jun 15 11:25:28.420: INFO: Pod "pod-projected-configmaps-e7e77aec-aefa-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011807453s STEP: Saw pod success Jun 15 11:25:28.420: INFO: Pod "pod-projected-configmaps-e7e77aec-aefa-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:25:28.424: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-e7e77aec-aefa-11ea-99db-0242ac11001b container projected-configmap-volume-test: STEP: delete the pod Jun 15 11:25:28.456: INFO: Waiting for pod pod-projected-configmaps-e7e77aec-aefa-11ea-99db-0242ac11001b to disappear Jun 15 11:25:28.497: INFO: Pod pod-projected-configmaps-e7e77aec-aefa-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:25:28.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-h2p7x" for this suite. 
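For the defaultMode variant the interesting knob is the projected volume's defaultMode field, which sets the permission bits on the files synthesized from the ConfigMap. A short sketch with the Go types; the 0400 mode, names and module layout are placeholders.

package main

import (
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// 0400: the projected files are readable only by their owner.
	mode := int32(0400)
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-mode-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/demo && cat /etc/demo/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "config", MountPath: "/etc/demo"}},
			}},
		},
	}
	out, err := yaml.Marshal(&pod)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}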
Jun 15 11:25:34.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:25:34.558: INFO: namespace: e2e-tests-projected-h2p7x, resource: bindings, ignored listing per whitelist Jun 15 11:25:34.626: INFO: namespace e2e-tests-projected-h2p7x deletion completed in 6.124549946s • [SLOW TEST:10.464 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:25:34.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 15 11:25:34.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-r6gc2' Jun 15 11:25:38.633: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 15 11:25:38.633: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Jun 15 11:25:43.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-r6gc2' Jun 15 11:25:43.137: INFO: stderr: "" Jun 15 11:25:43.137: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:25:43.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-r6gc2" for this suite. 
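The kubectl spec above still goes through the deprecated `kubectl run --generator=deployment/v1beta1` path, and the warning it captures points at the replacement. A small Go sketch of the equivalent flow using `kubectl create deployment` plus a rollout wait; the deployment name and image are copied from the log, while the timeout and the assumption of a configured kubectl are not.

package main

import (
	"log"
	"os/exec"
)

func kubectl(args ...string) string {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Modern replacement for `kubectl run --generator=deployment/v1beta1`.
	kubectl("create", "deployment", "e2e-test-nginx-deployment",
		"--image=docker.io/library/nginx:1.14-alpine")
	// Block until the deployment's pod is up, roughly what the spec's
	// "verifying the pod controlled by deployment ... was created" step polls for.
	log.Print(kubectl("rollout", "status", "deployment/e2e-test-nginx-deployment", "--timeout=2m"))
	// Clean up, mirroring the AfterEach above.
	kubectl("delete", "deployment", "e2e-test-nginx-deployment")
}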
Jun 15 11:26:05.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:26:05.286: INFO: namespace: e2e-tests-kubectl-r6gc2, resource: bindings, ignored listing per whitelist Jun 15 11:26:05.292: INFO: namespace e2e-tests-kubectl-r6gc2 deletion completed in 22.147480191s • [SLOW TEST:30.666 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:26:05.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 15 11:26:05.410: INFO: Waiting up to 5m0s for pod "pod-005534d6-aefb-11ea-99db-0242ac11001b" in namespace "e2e-tests-emptydir-s96rg" to be "success or failure" Jun 15 11:26:05.422: INFO: Pod "pod-005534d6-aefb-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.372116ms Jun 15 11:26:07.425: INFO: Pod "pod-005534d6-aefb-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015259653s Jun 15 11:26:09.429: INFO: Pod "pod-005534d6-aefb-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018706207s Jun 15 11:26:11.432: INFO: Pod "pod-005534d6-aefb-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022159962s Jun 15 11:26:13.435: INFO: Pod "pod-005534d6-aefb-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.025187144s Jun 15 11:26:15.439: INFO: Pod "pod-005534d6-aefb-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.029159827s STEP: Saw pod success Jun 15 11:26:15.439: INFO: Pod "pod-005534d6-aefb-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:26:15.442: INFO: Trying to get logs from node hunter-worker2 pod pod-005534d6-aefb-11ea-99db-0242ac11001b container test-container: STEP: delete the pod Jun 15 11:26:15.549: INFO: Waiting for pod pod-005534d6-aefb-11ea-99db-0242ac11001b to disappear Jun 15 11:26:16.372: INFO: Pod pod-005534d6-aefb-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:26:16.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-s96rg" for this suite. 
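The emptyDir spec names encode (user, file mode, medium), so (root,0666,tmpfs) means a root process writes a 0666 file into a memory-backed emptyDir. The real spec drives its checks through its own test-container image, so the busybox commands below only mirror the idea; the rest of the sketch assumes the usual Go API modules.

package main

import (
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium: Memory backs the emptyDir with tmpfs instead of
					// node disk, matching the "tmpfs" part of the spec name.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "writer",
				Image: "busybox",
				// Running as root (the busybox default), create a file, set
				// its mode to 0666 and show the result.
				Command:      []string{"sh", "-c", "echo hi > /mnt/scratch/f && chmod 0666 /mnt/scratch/f && ls -l /mnt/scratch"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/scratch"}},
			}},
		},
	}
	out, err := yaml.Marshal(&pod)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}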
Jun 15 11:26:25.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:26:25.342: INFO: namespace: e2e-tests-emptydir-s96rg, resource: bindings, ignored listing per whitelist Jun 15 11:26:25.381: INFO: namespace e2e-tests-emptydir-s96rg deletion completed in 9.003382563s • [SLOW TEST:20.089 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:26:25.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-0d1ece5b-aefb-11ea-99db-0242ac11001b STEP: Creating a pod to test consume secrets Jun 15 11:26:28.557: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0d1f7471-aefb-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-z8rtg" to be "success or failure" Jun 15 11:26:29.621: INFO: Pod "pod-projected-secrets-0d1f7471-aefb-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.063470696s Jun 15 11:26:31.648: INFO: Pod "pod-projected-secrets-0d1f7471-aefb-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.091318189s Jun 15 11:26:33.654: INFO: Pod "pod-projected-secrets-0d1f7471-aefb-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.096660003s Jun 15 11:26:35.657: INFO: Pod "pod-projected-secrets-0d1f7471-aefb-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 7.100267617s Jun 15 11:26:37.660: INFO: Pod "pod-projected-secrets-0d1f7471-aefb-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.10331597s STEP: Saw pod success Jun 15 11:26:37.660: INFO: Pod "pod-projected-secrets-0d1f7471-aefb-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:26:37.663: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-0d1f7471-aefb-11ea-99db-0242ac11001b container projected-secret-volume-test: STEP: delete the pod Jun 15 11:26:37.746: INFO: Waiting for pod pod-projected-secrets-0d1f7471-aefb-11ea-99db-0242ac11001b to disappear Jun 15 11:26:37.752: INFO: Pod pod-projected-secrets-0d1f7471-aefb-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:26:37.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-z8rtg" for this suite. 
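Same pattern as the projected ConfigMap case, except the projection source is a Secret, so defaultMode governs the permissions of the decoded secret files. A compact Go sketch of just the volume definition; the secret name and the 0440 mode are placeholders.

package main

import (
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// 0440: secret files readable by owner and group only.
	mode := int32(0440)
	vol := corev1.Volume{
		Name: "creds",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-secret"},
					},
				}},
			},
		},
	}
	out, err := yaml.Marshal(&vol)
	if err != nil {
		log.Fatal(err)
	}
	// Drops into a pod's .spec.volumes exactly like the ConfigMap variant above.
	fmt.Print(string(out))
}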
Jun 15 11:26:44.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:26:44.894: INFO: namespace: e2e-tests-projected-z8rtg, resource: bindings, ignored listing per whitelist Jun 15 11:26:44.945: INFO: namespace e2e-tests-projected-z8rtg deletion completed in 7.190861259s • [SLOW TEST:19.564 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:26:44.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:26:51.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-h6qnh" for this suite. 
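The Kubelet spec just above runs a busybox container with a read-only root filesystem and expects any write to / to fail. A hedged kubectl-driven sketch of such a pod; the pod name and the touch-based check are invented.

package main

import (
	"log"
	"os/exec"
	"strings"
)

// readOnlyRootFilesystem makes every write outside mounted volumes fail,
// so the `touch /newfile` below exits non-zero instead of dirtying /.
const readOnlyPod = `
apiVersion: v1
kind: Pod
metadata:
  name: readonly-root-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /newfile && echo wrote || echo read-only"]
    securityContext:
      readOnlyRootFilesystem: true
`

func main() {
	cmd := exec.Command("kubectl", "apply", "-f", "-")
	cmd.Stdin = strings.NewReader(readOnlyPod)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
	// `kubectl logs readonly-root-demo` should print "read-only".
}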
Jun 15 11:28:19.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:28:19.147: INFO: namespace: e2e-tests-kubelet-test-h6qnh, resource: bindings, ignored listing per whitelist Jun 15 11:28:19.168: INFO: namespace e2e-tests-kubelet-test-h6qnh deletion completed in 1m28.081344478s • [SLOW TEST:94.223 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:28:19.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jun 15 11:28:19.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:19.581: INFO: stderr: "" Jun 15 11:28:19.581: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 15 11:28:19.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:19.705: INFO: stderr: "" Jun 15 11:28:19.705: INFO: stdout: "update-demo-nautilus-djhs7 update-demo-nautilus-rkprs " Jun 15 11:28:19.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-djhs7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:19.804: INFO: stderr: "" Jun 15 11:28:19.804: INFO: stdout: "" Jun 15 11:28:19.804: INFO: update-demo-nautilus-djhs7 is created but not running Jun 15 11:28:24.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:25.117: INFO: stderr: "" Jun 15 11:28:25.117: INFO: stdout: "update-demo-nautilus-djhs7 update-demo-nautilus-rkprs " Jun 15 11:28:25.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-djhs7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:25.214: INFO: stderr: "" Jun 15 11:28:25.214: INFO: stdout: "" Jun 15 11:28:25.214: INFO: update-demo-nautilus-djhs7 is created but not running Jun 15 11:28:30.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:30.336: INFO: stderr: "" Jun 15 11:28:30.336: INFO: stdout: "update-demo-nautilus-djhs7 update-demo-nautilus-rkprs " Jun 15 11:28:30.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-djhs7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:30.445: INFO: stderr: "" Jun 15 11:28:30.445: INFO: stdout: "true" Jun 15 11:28:30.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-djhs7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:30.536: INFO: stderr: "" Jun 15 11:28:30.536: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 15 11:28:30.536: INFO: validating pod update-demo-nautilus-djhs7 Jun 15 11:28:30.547: INFO: got data: { "image": "nautilus.jpg" } Jun 15 11:28:30.547: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 15 11:28:30.547: INFO: update-demo-nautilus-djhs7 is verified up and running Jun 15 11:28:30.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rkprs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:30.655: INFO: stderr: "" Jun 15 11:28:30.655: INFO: stdout: "true" Jun 15 11:28:30.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rkprs -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:30.776: INFO: stderr: "" Jun 15 11:28:30.776: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 15 11:28:30.776: INFO: validating pod update-demo-nautilus-rkprs Jun 15 11:28:30.801: INFO: got data: { "image": "nautilus.jpg" } Jun 15 11:28:30.801: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 15 11:28:30.801: INFO: update-demo-nautilus-rkprs is verified up and running STEP: scaling down the replication controller Jun 15 11:28:30.803: INFO: scanned /root for discovery docs: Jun 15 11:28:30.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:32.062: INFO: stderr: "" Jun 15 11:28:32.062: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 15 11:28:32.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:32.247: INFO: stderr: "" Jun 15 11:28:32.247: INFO: stdout: "update-demo-nautilus-djhs7 update-demo-nautilus-rkprs " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 15 11:28:37.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:37.353: INFO: stderr: "" Jun 15 11:28:37.353: INFO: stdout: "update-demo-nautilus-djhs7 update-demo-nautilus-rkprs " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 15 11:28:42.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:42.445: INFO: stderr: "" Jun 15 11:28:42.445: INFO: stdout: "update-demo-nautilus-rkprs " Jun 15 11:28:42.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rkprs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:42.528: INFO: stderr: "" Jun 15 11:28:42.529: INFO: stdout: "true" Jun 15 11:28:42.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rkprs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:42.628: INFO: stderr: "" Jun 15 11:28:42.628: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 15 11:28:42.628: INFO: validating pod update-demo-nautilus-rkprs Jun 15 11:28:42.630: INFO: got data: { "image": "nautilus.jpg" } Jun 15 11:28:42.631: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 15 11:28:42.631: INFO: update-demo-nautilus-rkprs is verified up and running STEP: scaling up the replication controller Jun 15 11:28:42.632: INFO: scanned /root for discovery docs: Jun 15 11:28:42.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:43.773: INFO: stderr: "" Jun 15 11:28:43.773: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 15 11:28:43.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:43.863: INFO: stderr: "" Jun 15 11:28:43.863: INFO: stdout: "update-demo-nautilus-54dql update-demo-nautilus-rkprs " Jun 15 11:28:43.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54dql -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:43.953: INFO: stderr: "" Jun 15 11:28:43.953: INFO: stdout: "" Jun 15 11:28:43.953: INFO: update-demo-nautilus-54dql is created but not running Jun 15 11:28:48.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:49.466: INFO: stderr: "" Jun 15 11:28:49.466: INFO: stdout: "update-demo-nautilus-54dql update-demo-nautilus-rkprs " Jun 15 11:28:49.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54dql -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:49.560: INFO: stderr: "" Jun 15 11:28:49.560: INFO: stdout: "" Jun 15 11:28:49.560: INFO: update-demo-nautilus-54dql is created but not running Jun 15 11:28:54.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:54.663: INFO: stderr: "" Jun 15 11:28:54.663: INFO: stdout: "update-demo-nautilus-54dql update-demo-nautilus-rkprs " Jun 15 11:28:54.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54dql -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:54.757: INFO: stderr: "" Jun 15 11:28:54.757: INFO: stdout: "true" Jun 15 11:28:54.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54dql -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:54.861: INFO: stderr: "" Jun 15 11:28:54.861: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 15 11:28:54.861: INFO: validating pod update-demo-nautilus-54dql Jun 15 11:28:54.864: INFO: got data: { "image": "nautilus.jpg" } Jun 15 11:28:54.864: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 15 11:28:54.864: INFO: update-demo-nautilus-54dql is verified up and running Jun 15 11:28:54.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rkprs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:54.975: INFO: stderr: "" Jun 15 11:28:54.975: INFO: stdout: "true" Jun 15 11:28:54.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rkprs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:55.075: INFO: stderr: "" Jun 15 11:28:55.075: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 15 11:28:55.075: INFO: validating pod update-demo-nautilus-rkprs Jun 15 11:28:55.078: INFO: got data: { "image": "nautilus.jpg" } Jun 15 11:28:55.078: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 15 11:28:55.078: INFO: update-demo-nautilus-rkprs is verified up and running STEP: using delete to clean up resources Jun 15 11:28:55.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:55.168: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 15 11:28:55.168: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 15 11:28:55.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-ldn28' Jun 15 11:28:55.262: INFO: stderr: "No resources found.\n" Jun 15 11:28:55.262: INFO: stdout: "" Jun 15 11:28:55.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-ldn28 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 15 11:28:55.371: INFO: stderr: "" Jun 15 11:28:55.371: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:28:55.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ldn28" for this suite. 
Jun 15 11:29:28.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:29:28.336: INFO: namespace: e2e-tests-kubectl-ldn28, resource: bindings, ignored listing per whitelist Jun 15 11:29:28.369: INFO: namespace e2e-tests-kubectl-ldn28 deletion completed in 32.995146284s • [SLOW TEST:69.201 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:29:28.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 15 11:29:31.787: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 15 11:29:37.862: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 15 11:29:55.868: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 15 11:29:56.352: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-mwnzm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-mwnzm/deployments/test-cleanup-deployment,UID:89b777ce-aefb-11ea-99e8-0242ac110002,ResourceVersion:16070484,Generation:1,CreationTimestamp:2020-06-15 11:29:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jun 15 11:29:56.355: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:29:56.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-mwnzm" for this suite. Jun 15 11:30:05.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:30:05.125: INFO: namespace: e2e-tests-deployment-mwnzm, resource: bindings, ignored listing per whitelist Jun 15 11:30:05.160: INFO: namespace e2e-tests-deployment-mwnzm deletion completed in 8.358466976s • [SLOW TEST:36.790 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:30:05.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jun 15 11:30:14.679: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:30:15.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-m2rm7" for this suite. Jun 15 11:30:39.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:30:39.757: INFO: namespace: e2e-tests-replicaset-m2rm7, resource: bindings, ignored listing per whitelist Jun 15 11:30:39.780: INFO: namespace e2e-tests-replicaset-m2rm7 deletion completed in 24.075802508s • [SLOW TEST:34.620 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:30:39.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jun 15 11:30:40.712: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:30:40.715: INFO: Number of nodes with available pods: 0 Jun 15 11:30:40.715: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:30:42.239: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:30:42.243: INFO: Number of nodes with available pods: 0 Jun 15 11:30:42.243: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:30:43.334: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:30:43.570: INFO: Number of nodes with available pods: 0 Jun 15 11:30:43.570: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:30:44.299: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:30:44.473: INFO: Number of nodes with available pods: 0 Jun 15 11:30:44.473: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:30:45.031: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:30:45.034: INFO: Number of nodes with available pods: 0 Jun 15 11:30:45.034: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:30:45.718: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:30:45.721: INFO: Number of nodes with available pods: 0 Jun 15 11:30:45.721: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:30:47.293: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:30:47.295: INFO: Number of nodes with available pods: 0 Jun 15 11:30:47.295: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:30:48.071: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:30:48.095: INFO: Number of nodes with available pods: 0 Jun 15 11:30:48.095: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:30:48.803: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:30:48.897: INFO: Number of nodes with available pods: 0 Jun 15 11:30:48.897: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:30:49.719: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:30:49.723: INFO: Number of nodes with available pods: 0 Jun 15 11:30:49.723: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:30:51.133: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:30:51.162: INFO: Number of nodes with available pods: 2 Jun 15 11:30:51.162: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jun 15 11:30:51.662: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:30:51.700: INFO: Number of nodes with available pods: 2 Jun 15 11:30:51.700: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-7vprm, will wait for the garbage collector to delete the pods Jun 15 11:31:02.634: INFO: Deleting DaemonSet.extensions daemon-set took: 525.92218ms Jun 15 11:31:03.435: INFO: Terminating DaemonSet.extensions daemon-set pods took: 800.192065ms Jun 15 11:31:41.838: INFO: Number of nodes with available pods: 0 Jun 15 11:31:41.838: INFO: Number of running nodes: 0, number of available pods: 0 Jun 15 11:31:41.840: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-7vprm/daemonsets","resourceVersion":"16070806"},"items":null} Jun 15 11:31:41.842: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-7vprm/pods","resourceVersion":"16070806"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:31:41.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-7vprm" for this suite. 
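The long run of "can't tolerate node hunter-control-plane" messages above is expected: the DaemonSet under test carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, so only the two worker nodes are counted. A minimal sketch of a DaemonSet in the same shape, with placeholder labels and image rather than the suite's actual pod spec:

kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-daemonsets-7vprm apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set                # placeholder label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      # No toleration for node-role.kubernetes.io/master:NoSchedule is declared,
      # which is why hunter-control-plane is skipped in the counts above.
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1  # placeholder image
EOF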
Jun 15 11:31:49.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:31:49.874: INFO: namespace: e2e-tests-daemonsets-7vprm, resource: bindings, ignored listing per whitelist Jun 15 11:31:49.919: INFO: namespace e2e-tests-daemonsets-7vprm deletion completed in 8.066987016s • [SLOW TEST:70.138 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:31:49.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 15 11:31:50.313: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cddf9996-aefb-11ea-99db-0242ac11001b" in namespace "e2e-tests-downward-api-7vknj" to be "success or failure" Jun 15 11:31:50.343: INFO: Pod "downwardapi-volume-cddf9996-aefb-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.054816ms Jun 15 11:31:52.346: INFO: Pod "downwardapi-volume-cddf9996-aefb-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033458236s Jun 15 11:31:54.349: INFO: Pod "downwardapi-volume-cddf9996-aefb-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036325778s Jun 15 11:31:56.353: INFO: Pod "downwardapi-volume-cddf9996-aefb-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040042253s STEP: Saw pod success Jun 15 11:31:56.353: INFO: Pod "downwardapi-volume-cddf9996-aefb-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:31:56.355: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-cddf9996-aefb-11ea-99db-0242ac11001b container client-container: STEP: delete the pod Jun 15 11:31:56.404: INFO: Waiting for pod downwardapi-volume-cddf9996-aefb-11ea-99db-0242ac11001b to disappear Jun 15 11:31:56.425: INFO: Pod downwardapi-volume-cddf9996-aefb-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:31:56.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7vknj" for this suite. 
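The downward-api pod above mounts pod metadata as files and the test asserts the mode that defaultMode stamps on them. A minimal sketch of such a pod, assuming a defaultMode of 0400 and a busybox placeholder image (the suite's exact spec is not reproduced in the log):

kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-downward-api-7vknj apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29              # placeholder image
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400              # the file mode the test inspects
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF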
Jun 15 11:32:02.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:32:02.539: INFO: namespace: e2e-tests-downward-api-7vknj, resource: bindings, ignored listing per whitelist Jun 15 11:32:02.579: INFO: namespace e2e-tests-downward-api-7vknj deletion completed in 6.150630563s • [SLOW TEST:12.660 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:32:02.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Jun 15 11:32:02.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jun 15 11:32:02.872: INFO: stderr: "" Jun 15 11:32:02.872: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:32:02.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-j2ngn" for this suite. 
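By hand, the api-versions spec is a single command; the test only asserts that the core v1 group shows up in the output:

# Print every group/version the API server advertises, then check for the core group.
kubectl --kubeconfig=/root/.kube/config api-versions
kubectl --kubeconfig=/root/.kube/config api-versions | grep -x v1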
Jun 15 11:32:08.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:32:08.938: INFO: namespace: e2e-tests-kubectl-j2ngn, resource: bindings, ignored listing per whitelist Jun 15 11:32:08.951: INFO: namespace e2e-tests-kubectl-j2ngn deletion completed in 6.075931067s • [SLOW TEST:6.372 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:32:08.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 15 11:32:09.017: INFO: Creating ReplicaSet my-hostname-basic-d9136a14-aefb-11ea-99db-0242ac11001b Jun 15 11:32:09.043: INFO: Pod name my-hostname-basic-d9136a14-aefb-11ea-99db-0242ac11001b: Found 0 pods out of 1 Jun 15 11:32:14.066: INFO: Pod name my-hostname-basic-d9136a14-aefb-11ea-99db-0242ac11001b: Found 1 pods out of 1 Jun 15 11:32:14.066: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d9136a14-aefb-11ea-99db-0242ac11001b" is running Jun 15 11:32:14.068: INFO: Pod "my-hostname-basic-d9136a14-aefb-11ea-99db-0242ac11001b-zjvt6" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-15 11:32:09 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-15 11:32:13 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-15 11:32:13 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-15 11:32:09 +0000 UTC Reason: Message:}]) Jun 15 11:32:14.068: INFO: Trying to dial the pod Jun 15 11:32:19.079: INFO: Controller my-hostname-basic-d9136a14-aefb-11ea-99db-0242ac11001b: Got expected result from replica 1 [my-hostname-basic-d9136a14-aefb-11ea-99db-0242ac11001b-zjvt6]: "my-hostname-basic-d9136a14-aefb-11ea-99db-0242ac11001b-zjvt6", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:32:19.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-8c727" for this suite. 
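The ReplicaSet created above is not dumped in full; a minimal sketch of an equivalent object follows, with a placeholder image (the suite's image answers HTTP requests with the pod's hostname, which is what the "Trying to dial the pod" step verifies):

kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-replicaset-8c727 apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-example    # the suite appends a per-run UUID to this name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic-example
  template:
    metadata:
      labels:
        name: my-hostname-basic-example
    spec:
      containers:
      - name: my-hostname-basic-example
        image: nginx:1.15            # placeholder; not the hostname-serving test image
        ports:
        - containerPort: 80
EOF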
Jun 15 11:32:25.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:32:25.120: INFO: namespace: e2e-tests-replicaset-8c727, resource: bindings, ignored listing per whitelist Jun 15 11:32:25.189: INFO: namespace e2e-tests-replicaset-8c727 deletion completed in 6.106743201s • [SLOW TEST:16.237 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:32:25.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0615 11:33:06.532423 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 15 11:33:06.532: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:33:06.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-pxnzr" for this suite. 
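The garbage-collector spec above deletes the RC with orphan semantics and then waits 30 seconds to make sure the pods are left alone. Roughly the same thing from the command line of this kubectl generation (the RC name is illustrative):

# Delete the RC but leave its pods behind (orphan propagation).
kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-gc-pxnzr \
  delete rc simpletest.rc --cascade=false
# The pods should still be listed afterwards.
kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-gc-pxnzr get pods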
Jun 15 11:33:26.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:33:26.917: INFO: namespace: e2e-tests-gc-pxnzr, resource: bindings, ignored listing per whitelist Jun 15 11:33:26.962: INFO: namespace e2e-tests-gc-pxnzr deletion completed in 20.30015121s • [SLOW TEST:61.772 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:33:26.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 15 11:33:27.291: INFO: Waiting up to 5m0s for pod "downwardapi-volume-07a53c10-aefc-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-qb65c" to be "success or failure" Jun 15 11:33:27.529: INFO: Pod "downwardapi-volume-07a53c10-aefc-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 238.067615ms Jun 15 11:33:29.672: INFO: Pod "downwardapi-volume-07a53c10-aefc-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380976719s Jun 15 11:33:31.675: INFO: Pod "downwardapi-volume-07a53c10-aefc-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.384007371s Jun 15 11:33:33.762: INFO: Pod "downwardapi-volume-07a53c10-aefc-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.470780124s Jun 15 11:33:35.766: INFO: Pod "downwardapi-volume-07a53c10-aefc-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.474362382s STEP: Saw pod success Jun 15 11:33:35.766: INFO: Pod "downwardapi-volume-07a53c10-aefc-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:33:35.768: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-07a53c10-aefc-11ea-99db-0242ac11001b container client-container: STEP: delete the pod Jun 15 11:33:35.938: INFO: Waiting for pod downwardapi-volume-07a53c10-aefc-11ea-99db-0242ac11001b to disappear Jun 15 11:33:36.303: INFO: Pod downwardapi-volume-07a53c10-aefc-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:33:36.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qb65c" for this suite. Jun 15 11:33:42.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:33:42.592: INFO: namespace: e2e-tests-projected-qb65c, resource: bindings, ignored listing per whitelist Jun 15 11:33:42.597: INFO: namespace e2e-tests-projected-qb65c deletion completed in 6.28879599s • [SLOW TEST:15.635 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:33:42.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-10eedb70-aefc-11ea-99db-0242ac11001b STEP: Creating a pod to test consume secrets Jun 15 11:33:42.742: INFO: Waiting up to 5m0s for pod "pod-secrets-10ef744d-aefc-11ea-99db-0242ac11001b" in namespace "e2e-tests-secrets-98w5j" to be "success or failure" Jun 15 11:33:42.772: INFO: Pod "pod-secrets-10ef744d-aefc-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.809134ms Jun 15 11:33:44.775: INFO: Pod "pod-secrets-10ef744d-aefc-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033493263s Jun 15 11:33:46.780: INFO: Pod "pod-secrets-10ef744d-aefc-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038541581s Jun 15 11:33:48.784: INFO: Pod "pod-secrets-10ef744d-aefc-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.042693252s STEP: Saw pod success Jun 15 11:33:48.784: INFO: Pod "pod-secrets-10ef744d-aefc-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:33:48.788: INFO: Trying to get logs from node hunter-worker pod pod-secrets-10ef744d-aefc-11ea-99db-0242ac11001b container secret-volume-test: STEP: delete the pod Jun 15 11:33:48.807: INFO: Waiting for pod pod-secrets-10ef744d-aefc-11ea-99db-0242ac11001b to disappear Jun 15 11:33:48.818: INFO: Pod pod-secrets-10ef744d-aefc-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:33:48.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-98w5j" for this suite. Jun 15 11:33:54.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:33:54.874: INFO: namespace: e2e-tests-secrets-98w5j, resource: bindings, ignored listing per whitelist Jun 15 11:33:54.917: INFO: namespace e2e-tests-secrets-98w5j deletion completed in 6.096987743s • [SLOW TEST:12.320 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:33:54.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-4b6hh/secret-test-184249b6-aefc-11ea-99db-0242ac11001b STEP: Creating a pod to test consume secrets Jun 15 11:33:55.035: INFO: Waiting up to 5m0s for pod "pod-configmaps-184346f8-aefc-11ea-99db-0242ac11001b" in namespace "e2e-tests-secrets-4b6hh" to be "success or failure" Jun 15 11:33:55.054: INFO: Pod "pod-configmaps-184346f8-aefc-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.781844ms Jun 15 11:33:57.140: INFO: Pod "pod-configmaps-184346f8-aefc-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1049279s Jun 15 11:33:59.143: INFO: Pod "pod-configmaps-184346f8-aefc-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108314133s Jun 15 11:34:01.434: INFO: Pod "pod-configmaps-184346f8-aefc-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.398565644s STEP: Saw pod success Jun 15 11:34:01.434: INFO: Pod "pod-configmaps-184346f8-aefc-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:34:01.437: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-184346f8-aefc-11ea-99db-0242ac11001b container env-test: STEP: delete the pod Jun 15 11:34:01.459: INFO: Waiting for pod pod-configmaps-184346f8-aefc-11ea-99db-0242ac11001b to disappear Jun 15 11:34:01.470: INFO: Pod pod-configmaps-184346f8-aefc-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:34:01.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-4b6hh" for this suite. Jun 15 11:34:07.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:34:07.551: INFO: namespace: e2e-tests-secrets-4b6hh, resource: bindings, ignored listing per whitelist Jun 15 11:34:07.577: INFO: namespace e2e-tests-secrets-4b6hh deletion completed in 6.103973262s • [SLOW TEST:12.660 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:34:07.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Jun 15 11:34:07.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2zzcs' Jun 15 11:34:08.252: INFO: stderr: "" Jun 15 11:34:08.252: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 15 11:34:08.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2zzcs' Jun 15 11:34:08.392: INFO: stderr: "" Jun 15 11:34:08.392: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Jun 15 11:34:13.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2zzcs' Jun 15 11:34:13.714: INFO: stderr: "" Jun 15 11:34:13.714: INFO: stdout: "update-demo-nautilus-9kz4t update-demo-nautilus-gnr8f " Jun 15 11:34:13.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9kz4t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2zzcs' Jun 15 11:34:14.612: INFO: stderr: "" Jun 15 11:34:14.612: INFO: stdout: "" Jun 15 11:34:14.612: INFO: update-demo-nautilus-9kz4t is created but not running Jun 15 11:34:19.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2zzcs' Jun 15 11:34:19.735: INFO: stderr: "" Jun 15 11:34:19.735: INFO: stdout: "update-demo-nautilus-9kz4t update-demo-nautilus-gnr8f " Jun 15 11:34:19.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9kz4t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2zzcs' Jun 15 11:34:19.839: INFO: stderr: "" Jun 15 11:34:19.839: INFO: stdout: "true" Jun 15 11:34:19.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9kz4t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2zzcs' Jun 15 11:34:19.942: INFO: stderr: "" Jun 15 11:34:19.942: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 15 11:34:19.942: INFO: validating pod update-demo-nautilus-9kz4t Jun 15 11:34:19.960: INFO: got data: { "image": "nautilus.jpg" } Jun 15 11:34:19.960: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 15 11:34:19.960: INFO: update-demo-nautilus-9kz4t is verified up and running Jun 15 11:34:19.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gnr8f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2zzcs' Jun 15 11:34:20.056: INFO: stderr: "" Jun 15 11:34:20.056: INFO: stdout: "true" Jun 15 11:34:20.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gnr8f -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2zzcs' Jun 15 11:34:20.143: INFO: stderr: "" Jun 15 11:34:20.143: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 15 11:34:20.143: INFO: validating pod update-demo-nautilus-gnr8f Jun 15 11:34:20.157: INFO: got data: { "image": "nautilus.jpg" } Jun 15 11:34:20.157: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 15 11:34:20.157: INFO: update-demo-nautilus-gnr8f is verified up and running STEP: rolling-update to new replication controller Jun 15 11:34:20.158: INFO: scanned /root for discovery docs: Jun 15 11:34:20.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-2zzcs' Jun 15 11:35:07.903: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 15 11:35:07.903: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 15 11:35:07.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2zzcs' Jun 15 11:35:08.012: INFO: stderr: "" Jun 15 11:35:08.012: INFO: stdout: "update-demo-kitten-5lb2c update-demo-kitten-7tnw8 " Jun 15 11:35:08.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5lb2c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2zzcs' Jun 15 11:35:08.135: INFO: stderr: "" Jun 15 11:35:08.135: INFO: stdout: "true" Jun 15 11:35:08.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5lb2c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2zzcs' Jun 15 11:35:08.247: INFO: stderr: "" Jun 15 11:35:08.247: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 15 11:35:08.247: INFO: validating pod update-demo-kitten-5lb2c Jun 15 11:35:08.262: INFO: got data: { "image": "kitten.jpg" } Jun 15 11:35:08.262: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 15 11:35:08.262: INFO: update-demo-kitten-5lb2c is verified up and running Jun 15 11:35:08.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7tnw8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2zzcs' Jun 15 11:35:08.355: INFO: stderr: "" Jun 15 11:35:08.356: INFO: stdout: "true" Jun 15 11:35:08.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7tnw8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2zzcs' Jun 15 11:35:08.462: INFO: stderr: "" Jun 15 11:35:08.462: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 15 11:35:08.462: INFO: validating pod update-demo-kitten-7tnw8 Jun 15 11:35:08.472: INFO: got data: { "image": "kitten.jpg" } Jun 15 11:35:08.472: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 15 11:35:08.472: INFO: update-demo-kitten-7tnw8 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:35:08.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2zzcs" for this suite. Jun 15 11:35:48.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:35:48.611: INFO: namespace: e2e-tests-kubectl-2zzcs, resource: bindings, ignored listing per whitelist Jun 15 11:35:48.635: INFO: namespace e2e-tests-kubectl-2zzcs deletion completed in 40.161117224s • [SLOW TEST:101.058 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:35:48.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 15 11:35:48.907: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c1d63a8-aefc-11ea-99db-0242ac11001b" in namespace "e2e-tests-downward-api-mf8cl" to be "success or failure" Jun 15 11:35:48.916: INFO: Pod "downwardapi-volume-5c1d63a8-aefc-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.551918ms Jun 15 11:35:51.156: INFO: Pod "downwardapi-volume-5c1d63a8-aefc-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.2488807s Jun 15 11:35:53.381: INFO: Pod "downwardapi-volume-5c1d63a8-aefc-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.474150008s Jun 15 11:35:55.622: INFO: Pod "downwardapi-volume-5c1d63a8-aefc-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.715250717s Jun 15 11:35:58.106: INFO: Pod "downwardapi-volume-5c1d63a8-aefc-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 9.198778957s Jun 15 11:36:00.110: INFO: Pod "downwardapi-volume-5c1d63a8-aefc-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.203032361s STEP: Saw pod success Jun 15 11:36:00.110: INFO: Pod "downwardapi-volume-5c1d63a8-aefc-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:36:00.113: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-5c1d63a8-aefc-11ea-99db-0242ac11001b container client-container: STEP: delete the pod Jun 15 11:36:02.606: INFO: Waiting for pod downwardapi-volume-5c1d63a8-aefc-11ea-99db-0242ac11001b to disappear Jun 15 11:36:03.212: INFO: Pod downwardapi-volume-5c1d63a8-aefc-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:36:03.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-mf8cl" for this suite. Jun 15 11:36:11.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:36:11.914: INFO: namespace: e2e-tests-downward-api-mf8cl, resource: bindings, ignored listing per whitelist Jun 15 11:36:11.944: INFO: namespace e2e-tests-downward-api-mf8cl deletion completed in 8.321347456s • [SLOW TEST:23.308 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:36:11.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 15 11:36:12.021: INFO: Waiting up to 5m0s for pod "downward-api-69e9dd47-aefc-11ea-99db-0242ac11001b" in namespace "e2e-tests-downward-api-rfm7f" to be "success or failure" Jun 15 11:36:12.106: INFO: Pod "downward-api-69e9dd47-aefc-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 85.128325ms Jun 15 11:36:14.144: INFO: Pod "downward-api-69e9dd47-aefc-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.122534181s Jun 15 11:36:16.779: INFO: Pod "downward-api-69e9dd47-aefc-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.757786448s Jun 15 11:36:18.782: INFO: Pod "downward-api-69e9dd47-aefc-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.760928669s Jun 15 11:36:21.362: INFO: Pod "downward-api-69e9dd47-aefc-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 9.341396781s Jun 15 11:36:23.365: INFO: Pod "downward-api-69e9dd47-aefc-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.344182032s STEP: Saw pod success Jun 15 11:36:23.365: INFO: Pod "downward-api-69e9dd47-aefc-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:36:23.367: INFO: Trying to get logs from node hunter-worker2 pod downward-api-69e9dd47-aefc-11ea-99db-0242ac11001b container dapi-container: STEP: delete the pod Jun 15 11:36:24.244: INFO: Waiting for pod downward-api-69e9dd47-aefc-11ea-99db-0242ac11001b to disappear Jun 15 11:36:24.246: INFO: Pod downward-api-69e9dd47-aefc-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:36:24.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-rfm7f" for this suite. Jun 15 11:36:32.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:36:32.324: INFO: namespace: e2e-tests-downward-api-rfm7f, resource: bindings, ignored listing per whitelist Jun 15 11:36:32.359: INFO: namespace e2e-tests-downward-api-rfm7f deletion completed in 8.109923895s • [SLOW TEST:20.415 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:36:32.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-zghqp [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-zghqp STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-zghqp STEP: Waiting until 
pod test-pod will start running in namespace e2e-tests-statefulset-zghqp STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-zghqp Jun 15 11:36:46.311: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-zghqp, name: ss-0, uid: 7ce97890-aefc-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete. Jun 15 11:36:51.503: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-zghqp, name: ss-0, uid: 7ce97890-aefc-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. Jun 15 11:36:51.615: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-zghqp, name: ss-0, uid: 7ce97890-aefc-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. Jun 15 11:36:51.618: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-zghqp STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-zghqp STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-zghqp and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 15 11:36:55.924: INFO: Deleting all statefulset in ns e2e-tests-statefulset-zghqp Jun 15 11:36:55.927: INFO: Scaling statefulset ss to 0 Jun 15 11:37:15.946: INFO: Waiting for statefulset status.replicas updated to 0 Jun 15 11:37:15.949: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:37:15.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-zghqp" for this suite. 
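The AfterEach above tears the StatefulSet down by scaling it to zero before deleting it; the equivalent manual steps against this run's namespace:

# Scale the StatefulSet down, confirm status.replicas reaches 0, then delete it.
kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-statefulset-zghqp \
  scale statefulset ss --replicas=0
kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-statefulset-zghqp \
  get statefulset ss -o template --template='{{.status.replicas}}'
kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-statefulset-zghqp \
  delete statefulset ss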
Jun 15 11:37:22.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:37:22.106: INFO: namespace: e2e-tests-statefulset-zghqp, resource: bindings, ignored listing per whitelist Jun 15 11:37:22.171: INFO: namespace e2e-tests-statefulset-zghqp deletion completed in 6.141052538s • [SLOW TEST:49.812 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:37:22.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
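The DaemonSet the suite just created can be pictured as the minimal manifest below (the image is an assumption; the suite uses its own test image). Note that it carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, which is exactly why the entries that follow keep skipping hunter-control-plane.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1    # placeholder image, an assumption
      # no tolerations: the tainted control-plane node never receives a pod,
      # so only hunter-worker and hunter-worker2 are expected to run one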
Jun 15 11:37:22.289: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:22.291: INFO: Number of nodes with available pods: 0 Jun 15 11:37:22.291: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:23.297: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:23.302: INFO: Number of nodes with available pods: 0 Jun 15 11:37:23.302: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:24.314: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:24.318: INFO: Number of nodes with available pods: 0 Jun 15 11:37:24.318: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:25.900: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:26.167: INFO: Number of nodes with available pods: 0 Jun 15 11:37:26.167: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:26.539: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:26.543: INFO: Number of nodes with available pods: 0 Jun 15 11:37:26.543: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:27.307: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:27.334: INFO: Number of nodes with available pods: 1 Jun 15 11:37:27.334: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:28.295: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:28.296: INFO: Number of nodes with available pods: 2 Jun 15 11:37:28.296: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
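The stop-and-revive check that follows can be reproduced by hand with stock kubectl. The label selector below is an assumption (the suite generates its own labels); the namespace is the one from this run.

# watch the DaemonSet land on every schedulable node
kubectl -n e2e-tests-daemonsets-fx7b9 rollout status daemonset/daemon-set
kubectl -n e2e-tests-daemonsets-fx7b9 get pods -l app=daemon-set -o wide

# delete one daemon pod and watch the controller recreate it
kubectl -n e2e-tests-daemonsets-fx7b9 delete pod <one-daemon-pod>
kubectl -n e2e-tests-daemonsets-fx7b9 get pods -l app=daemon-set -w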
Jun 15 11:37:28.350: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:28.418: INFO: Number of nodes with available pods: 1 Jun 15 11:37:28.418: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:29.423: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:29.426: INFO: Number of nodes with available pods: 1 Jun 15 11:37:29.426: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:30.515: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:30.567: INFO: Number of nodes with available pods: 1 Jun 15 11:37:30.567: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:31.434: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:31.436: INFO: Number of nodes with available pods: 1 Jun 15 11:37:31.437: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:32.594: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:32.597: INFO: Number of nodes with available pods: 1 Jun 15 11:37:32.597: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:33.534: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:33.561: INFO: Number of nodes with available pods: 1 Jun 15 11:37:33.561: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:34.423: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:34.425: INFO: Number of nodes with available pods: 1 Jun 15 11:37:34.425: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:35.425: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:35.428: INFO: Number of nodes with available pods: 1 Jun 15 11:37:35.428: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:36.422: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:36.424: INFO: Number of nodes with available pods: 1 Jun 15 11:37:36.424: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:37.449: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:37.452: INFO: Number of nodes with available pods: 1 Jun 15 11:37:37.452: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:40.156: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:41.348: INFO: Number of nodes with available pods: 1 Jun 15 11:37:41.348: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:42.054: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:42.057: INFO: Number of nodes with available pods: 1 Jun 15 11:37:42.057: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:42.886: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:43.162: INFO: Number of nodes with available pods: 1 Jun 15 11:37:43.162: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:43.438: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:43.695: INFO: Number of nodes with available pods: 1 Jun 15 11:37:43.695: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:44.422: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:44.424: INFO: Number of nodes with available pods: 1 Jun 15 11:37:44.424: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:45.423: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:45.426: INFO: Number of nodes with available pods: 1 Jun 15 11:37:45.426: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:46.455: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:46.458: INFO: Number of nodes with available pods: 1 Jun 15 11:37:46.458: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:47.423: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:47.426: INFO: Number of nodes with available pods: 1 Jun 15 11:37:47.426: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:48.492: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:48.536: INFO: Number of nodes with available pods: 1 Jun 15 11:37:48.536: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:49.646: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:49.979: INFO: Number of nodes with available pods: 1 Jun 15 11:37:49.979: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:50.422: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:50.424: 
INFO: Number of nodes with available pods: 1 Jun 15 11:37:50.424: INFO: Node hunter-worker is running more than one daemon pod Jun 15 11:37:51.516: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 11:37:51.521: INFO: Number of nodes with available pods: 2 Jun 15 11:37:51.521: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-fx7b9, will wait for the garbage collector to delete the pods Jun 15 11:37:51.583: INFO: Deleting DaemonSet.extensions daemon-set took: 5.688443ms Jun 15 11:37:51.783: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.235822ms Jun 15 11:37:58.538: INFO: Number of nodes with available pods: 0 Jun 15 11:37:58.538: INFO: Number of running nodes: 0, number of available pods: 0 Jun 15 11:37:58.540: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-fx7b9/daemonsets","resourceVersion":"16072189"},"items":null} Jun 15 11:37:58.542: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-fx7b9/pods","resourceVersion":"16072189"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:37:58.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-fx7b9" for this suite. 
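The teardown above relies on cascading deletion: removing the DaemonSet lets the garbage collector delete its pods, which is why both listings come back with "items":null. A rough kubectl equivalent (the suite drives this through the API directly):

kubectl -n e2e-tests-daemonsets-fx7b9 delete daemonset daemon-set
# once the garbage collector has caught up, both lists are empty
kubectl -n e2e-tests-daemonsets-fx7b9 get daemonsets,pods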
Jun 15 11:38:08.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:38:08.596: INFO: namespace: e2e-tests-daemonsets-fx7b9, resource: bindings, ignored listing per whitelist Jun 15 11:38:08.648: INFO: namespace e2e-tests-daemonsets-fx7b9 deletion completed in 10.09441139s • [SLOW TEST:46.476 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:38:08.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Jun 15 11:38:09.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qs9h5' Jun 15 11:38:15.524: INFO: stderr: "" Jun 15 11:38:15.524: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. 
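The create -f - invocation above reads a manifest from stdin; that manifest is not in the log, so the ReplicationController below is an assumed sketch (selector and container name taken from later entries, image version from the Redis banner). The kubectl logs invocations after it are the filtering flags the next entries exercise.

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qs9h5
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: redis:3.2.12          # assumption; the banner below reports Redis 3.2.12
        ports:
        - containerPort: 6379
EOF

kubectl -n e2e-tests-kubectl-qs9h5 logs redis-master-rc8kw redis-master --tail=1
kubectl -n e2e-tests-kubectl-qs9h5 logs redis-master-rc8kw redis-master --limit-bytes=1
kubectl -n e2e-tests-kubectl-qs9h5 logs redis-master-rc8kw redis-master --tail=1 --timestamps
kubectl -n e2e-tests-kubectl-qs9h5 logs redis-master-rc8kw redis-master --since=24h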
Jun 15 11:38:16.527: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:38:16.528: INFO: Found 0 / 1 Jun 15 11:38:17.589: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:38:17.589: INFO: Found 0 / 1 Jun 15 11:38:18.528: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:38:18.528: INFO: Found 0 / 1 Jun 15 11:38:19.599: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:38:19.599: INFO: Found 0 / 1 Jun 15 11:38:20.802: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:38:20.802: INFO: Found 0 / 1 Jun 15 11:38:22.791: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:38:22.791: INFO: Found 0 / 1 Jun 15 11:38:26.354: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:38:26.354: INFO: Found 0 / 1 Jun 15 11:38:26.575: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:38:26.575: INFO: Found 0 / 1 Jun 15 11:38:27.527: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:38:27.527: INFO: Found 0 / 1 Jun 15 11:38:29.312: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:38:29.313: INFO: Found 0 / 1 Jun 15 11:38:29.528: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:38:29.528: INFO: Found 0 / 1 Jun 15 11:38:30.725: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:38:30.725: INFO: Found 0 / 1 Jun 15 11:38:31.527: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:38:31.527: INFO: Found 0 / 1 Jun 15 11:38:32.528: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:38:32.528: INFO: Found 1 / 1 Jun 15 11:38:32.528: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 15 11:38:32.531: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:38:32.531: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jun 15 11:38:32.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rc8kw redis-master --namespace=e2e-tests-kubectl-qs9h5' Jun 15 11:38:32.632: INFO: stderr: "" Jun 15 11:38:32.632: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 15 Jun 11:38:31.213 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Jun 11:38:31.216 # Server started, Redis version 3.2.12\n1:M 15 Jun 11:38:31.216 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 15 Jun 11:38:31.216 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jun 15 11:38:32.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rc8kw redis-master --namespace=e2e-tests-kubectl-qs9h5 --tail=1' Jun 15 11:38:32.721: INFO: stderr: "" Jun 15 11:38:32.721: INFO: stdout: "1:M 15 Jun 11:38:31.216 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jun 15 11:38:32.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rc8kw redis-master --namespace=e2e-tests-kubectl-qs9h5 --limit-bytes=1' Jun 15 11:38:32.817: INFO: stderr: "" Jun 15 11:38:32.817: INFO: stdout: " " STEP: exposing timestamps Jun 15 11:38:32.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rc8kw redis-master --namespace=e2e-tests-kubectl-qs9h5 --tail=1 --timestamps' Jun 15 11:38:32.919: INFO: stderr: "" Jun 15 11:38:32.919: INFO: stdout: "2020-06-15T11:38:31.216499549Z 1:M 15 Jun 11:38:31.216 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jun 15 11:38:35.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rc8kw redis-master --namespace=e2e-tests-kubectl-qs9h5 --since=1s' Jun 15 11:38:35.534: INFO: stderr: "" Jun 15 11:38:35.534: INFO: stdout: "" Jun 15 11:38:35.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-rc8kw redis-master --namespace=e2e-tests-kubectl-qs9h5 --since=24h' Jun 15 11:38:35.636: INFO: stderr: "" Jun 15 11:38:35.636: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 15 Jun 11:38:31.213 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Jun 11:38:31.216 # Server started, Redis version 3.2.12\n1:M 15 Jun 11:38:31.216 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 15 Jun 11:38:31.216 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Jun 15 11:38:35.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qs9h5' Jun 15 11:38:35.730: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 15 11:38:35.730: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jun 15 11:38:35.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-qs9h5' Jun 15 11:38:35.837: INFO: stderr: "No resources found.\n" Jun 15 11:38:35.837: INFO: stdout: "" Jun 15 11:38:35.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-qs9h5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 15 11:38:35.916: INFO: stderr: "" Jun 15 11:38:35.917: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:38:35.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qs9h5" for this suite. Jun 15 11:38:44.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:38:44.143: INFO: namespace: e2e-tests-kubectl-qs9h5, resource: bindings, ignored listing per whitelist Jun 15 11:38:44.159: INFO: namespace e2e-tests-kubectl-qs9h5 deletion completed in 8.239936442s • [SLOW TEST:35.511 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:38:44.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-dhb78 I0615 11:38:44.275503 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-dhb78, replica count: 1 I0615 11:38:45.325949 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0615 11:38:46.326205 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0615 11:38:47.326425 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0615 11:38:48.326604 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 15 11:38:48.449: INFO: Created: 
latency-svc-qzmw6 Jun 15 11:38:48.471: INFO: Got endpoints: latency-svc-qzmw6 [44.504582ms] Jun 15 11:38:48.499: INFO: Created: latency-svc-npbfg Jun 15 11:38:48.557: INFO: Got endpoints: latency-svc-npbfg [85.63976ms] Jun 15 11:38:48.599: INFO: Created: latency-svc-bhdq4 Jun 15 11:38:48.611: INFO: Got endpoints: latency-svc-bhdq4 [139.943605ms] Jun 15 11:38:48.632: INFO: Created: latency-svc-sp87l Jun 15 11:38:48.648: INFO: Got endpoints: latency-svc-sp87l [176.136676ms] Jun 15 11:38:48.683: INFO: Created: latency-svc-r78z7 Jun 15 11:38:48.716: INFO: Got endpoints: latency-svc-r78z7 [244.068694ms] Jun 15 11:38:48.754: INFO: Created: latency-svc-pk48h Jun 15 11:38:48.777: INFO: Got endpoints: latency-svc-pk48h [305.574549ms] Jun 15 11:38:48.832: INFO: Created: latency-svc-9wjwv Jun 15 11:38:48.835: INFO: Got endpoints: latency-svc-9wjwv [362.85567ms] Jun 15 11:38:48.885: INFO: Created: latency-svc-9t9sv Jun 15 11:38:48.898: INFO: Got endpoints: latency-svc-9t9sv [425.851249ms] Jun 15 11:38:48.917: INFO: Created: latency-svc-zw9b7 Jun 15 11:38:48.958: INFO: Got endpoints: latency-svc-zw9b7 [485.824944ms] Jun 15 11:38:48.971: INFO: Created: latency-svc-j8hks Jun 15 11:38:48.985: INFO: Got endpoints: latency-svc-j8hks [512.228771ms] Jun 15 11:38:49.005: INFO: Created: latency-svc-42qdk Jun 15 11:38:49.015: INFO: Got endpoints: latency-svc-42qdk [542.668907ms] Jun 15 11:38:49.035: INFO: Created: latency-svc-g4kvs Jun 15 11:38:49.045: INFO: Got endpoints: latency-svc-g4kvs [572.73497ms] Jun 15 11:38:49.102: INFO: Created: latency-svc-mfw7v Jun 15 11:38:49.105: INFO: Got endpoints: latency-svc-mfw7v [631.919662ms] Jun 15 11:38:49.140: INFO: Created: latency-svc-knmn9 Jun 15 11:38:49.170: INFO: Got endpoints: latency-svc-knmn9 [696.62823ms] Jun 15 11:38:49.194: INFO: Created: latency-svc-52g4k Jun 15 11:38:49.234: INFO: Got endpoints: latency-svc-52g4k [760.50379ms] Jun 15 11:38:49.245: INFO: Created: latency-svc-8b4jx Jun 15 11:38:49.260: INFO: Got endpoints: latency-svc-8b4jx [786.942733ms] Jun 15 11:38:49.289: INFO: Created: latency-svc-jzqsd Jun 15 11:38:49.296: INFO: Got endpoints: latency-svc-jzqsd [738.983796ms] Jun 15 11:38:49.366: INFO: Created: latency-svc-cb5pn Jun 15 11:38:49.369: INFO: Got endpoints: latency-svc-cb5pn [757.837106ms] Jun 15 11:38:49.397: INFO: Created: latency-svc-kj887 Jun 15 11:38:49.411: INFO: Got endpoints: latency-svc-kj887 [763.721344ms] Jun 15 11:38:49.455: INFO: Created: latency-svc-njm27 Jun 15 11:38:49.497: INFO: Got endpoints: latency-svc-njm27 [781.061024ms] Jun 15 11:38:49.509: INFO: Created: latency-svc-2r6zd Jun 15 11:38:49.526: INFO: Got endpoints: latency-svc-2r6zd [748.133619ms] Jun 15 11:38:49.583: INFO: Created: latency-svc-wrkwx Jun 15 11:38:49.665: INFO: Got endpoints: latency-svc-wrkwx [830.010235ms] Jun 15 11:38:49.685: INFO: Created: latency-svc-hfm5b Jun 15 11:38:49.694: INFO: Got endpoints: latency-svc-hfm5b [795.674533ms] Jun 15 11:38:50.195: INFO: Created: latency-svc-cgz69 Jun 15 11:38:50.258: INFO: Got endpoints: latency-svc-cgz69 [1.299596107s] Jun 15 11:38:50.762: INFO: Created: latency-svc-kqjbk Jun 15 11:38:50.790: INFO: Got endpoints: latency-svc-kqjbk [1.805438137s] Jun 15 11:38:50.814: INFO: Created: latency-svc-x4jjz Jun 15 11:38:50.830: INFO: Got endpoints: latency-svc-x4jjz [1.814504859s] Jun 15 11:38:50.854: INFO: Created: latency-svc-b8gmx Jun 15 11:38:50.904: INFO: Got endpoints: latency-svc-b8gmx [1.858480687s] Jun 15 11:38:51.395: INFO: Created: latency-svc-w8lkd Jun 15 11:38:51.416: INFO: Got endpoints: latency-svc-w8lkd 
[2.310817548s] Jun 15 11:38:51.468: INFO: Created: latency-svc-kb8gg Jun 15 11:38:51.468: INFO: Got endpoints: latency-svc-kb8gg [2.298634526s] Jun 15 11:38:51.492: INFO: Created: latency-svc-2pj89 Jun 15 11:38:51.508: INFO: Got endpoints: latency-svc-2pj89 [2.274294302s] Jun 15 11:38:51.629: INFO: Created: latency-svc-nrgnd Jun 15 11:38:51.676: INFO: Got endpoints: latency-svc-nrgnd [2.415302332s] Jun 15 11:38:51.702: INFO: Created: latency-svc-r4z9d Jun 15 11:38:51.772: INFO: Got endpoints: latency-svc-r4z9d [2.476203042s] Jun 15 11:38:51.799: INFO: Created: latency-svc-q8gf2 Jun 15 11:38:51.820: INFO: Got endpoints: latency-svc-q8gf2 [2.451340038s] Jun 15 11:38:51.850: INFO: Created: latency-svc-r42qw Jun 15 11:38:51.862: INFO: Got endpoints: latency-svc-r42qw [2.450694828s] Jun 15 11:38:51.922: INFO: Created: latency-svc-n7rnb Jun 15 11:38:51.925: INFO: Got endpoints: latency-svc-n7rnb [2.428341811s] Jun 15 11:38:51.959: INFO: Created: latency-svc-fbrlr Jun 15 11:38:51.971: INFO: Got endpoints: latency-svc-fbrlr [2.445176188s] Jun 15 11:38:51.992: INFO: Created: latency-svc-cpn65 Jun 15 11:38:52.021: INFO: Got endpoints: latency-svc-cpn65 [2.356476932s] Jun 15 11:38:52.096: INFO: Created: latency-svc-9clkz Jun 15 11:38:52.104: INFO: Got endpoints: latency-svc-9clkz [2.410187422s] Jun 15 11:38:52.124: INFO: Created: latency-svc-ctf47 Jun 15 11:38:52.140: INFO: Got endpoints: latency-svc-ctf47 [1.882042269s] Jun 15 11:38:52.163: INFO: Created: latency-svc-9crtt Jun 15 11:38:52.176: INFO: Got endpoints: latency-svc-9crtt [1.385592534s] Jun 15 11:38:52.193: INFO: Created: latency-svc-7mpng Jun 15 11:38:52.272: INFO: Got endpoints: latency-svc-7mpng [1.441889567s] Jun 15 11:38:52.272: INFO: Created: latency-svc-gnws2 Jun 15 11:38:52.316: INFO: Got endpoints: latency-svc-gnws2 [1.411587944s] Jun 15 11:38:52.355: INFO: Created: latency-svc-m6cgw Jun 15 11:38:52.413: INFO: Got endpoints: latency-svc-m6cgw [997.686569ms] Jun 15 11:38:52.415: INFO: Created: latency-svc-c94mw Jun 15 11:38:52.429: INFO: Got endpoints: latency-svc-c94mw [960.96945ms] Jun 15 11:38:52.457: INFO: Created: latency-svc-tqsl2 Jun 15 11:38:52.471: INFO: Got endpoints: latency-svc-tqsl2 [963.153066ms] Jun 15 11:38:52.551: INFO: Created: latency-svc-l8xwb Jun 15 11:38:52.553: INFO: Got endpoints: latency-svc-l8xwb [877.769097ms] Jun 15 11:38:52.591: INFO: Created: latency-svc-wmqdx Jun 15 11:38:52.619: INFO: Got endpoints: latency-svc-wmqdx [846.961638ms] Jun 15 11:38:52.640: INFO: Created: latency-svc-p8rx7 Jun 15 11:38:52.707: INFO: Got endpoints: latency-svc-p8rx7 [886.300871ms] Jun 15 11:38:52.727: INFO: Created: latency-svc-xsmm2 Jun 15 11:38:52.758: INFO: Got endpoints: latency-svc-xsmm2 [895.861043ms] Jun 15 11:38:52.799: INFO: Created: latency-svc-8k2wl Jun 15 11:38:52.856: INFO: Got endpoints: latency-svc-8k2wl [931.032913ms] Jun 15 11:38:52.860: INFO: Created: latency-svc-s8bkg Jun 15 11:38:52.867: INFO: Got endpoints: latency-svc-s8bkg [895.812033ms] Jun 15 11:38:52.888: INFO: Created: latency-svc-9fn6d Jun 15 11:38:52.896: INFO: Got endpoints: latency-svc-9fn6d [874.922735ms] Jun 15 11:38:52.919: INFO: Created: latency-svc-p8q7b Jun 15 11:38:52.933: INFO: Got endpoints: latency-svc-p8q7b [828.983699ms] Jun 15 11:38:52.955: INFO: Created: latency-svc-54jgd Jun 15 11:38:52.994: INFO: Got endpoints: latency-svc-54jgd [854.322152ms] Jun 15 11:38:53.010: INFO: Created: latency-svc-qzfcq Jun 15 11:38:53.024: INFO: Got endpoints: latency-svc-qzfcq [847.883875ms] Jun 15 11:38:53.072: INFO: Created: latency-svc-mqtcg Jun 
15 11:38:53.084: INFO: Got endpoints: latency-svc-mqtcg [812.791454ms] Jun 15 11:38:53.145: INFO: Created: latency-svc-c4rwz Jun 15 11:38:53.147: INFO: Got endpoints: latency-svc-c4rwz [831.750257ms] Jun 15 11:38:53.171: INFO: Created: latency-svc-wb7g8 Jun 15 11:38:53.192: INFO: Got endpoints: latency-svc-wb7g8 [778.98222ms] Jun 15 11:38:53.233: INFO: Created: latency-svc-kbdhs Jun 15 11:38:53.300: INFO: Got endpoints: latency-svc-kbdhs [870.130591ms] Jun 15 11:38:53.306: INFO: Created: latency-svc-g7kz5 Jun 15 11:38:53.319: INFO: Got endpoints: latency-svc-g7kz5 [847.597142ms] Jun 15 11:38:53.347: INFO: Created: latency-svc-2zxbd Jun 15 11:38:53.361: INFO: Got endpoints: latency-svc-2zxbd [807.896685ms] Jun 15 11:38:53.393: INFO: Created: latency-svc-rb7g9 Jun 15 11:38:53.449: INFO: Got endpoints: latency-svc-rb7g9 [829.95277ms] Jun 15 11:38:53.474: INFO: Created: latency-svc-xdm6v Jun 15 11:38:53.511: INFO: Got endpoints: latency-svc-xdm6v [804.735399ms] Jun 15 11:38:53.535: INFO: Created: latency-svc-q4c8m Jun 15 11:38:53.548: INFO: Got endpoints: latency-svc-q4c8m [789.640054ms] Jun 15 11:38:53.606: INFO: Created: latency-svc-jz2xg Jun 15 11:38:53.611: INFO: Got endpoints: latency-svc-jz2xg [754.469061ms] Jun 15 11:38:53.646: INFO: Created: latency-svc-fj4qz Jun 15 11:38:53.684: INFO: Got endpoints: latency-svc-fj4qz [817.804683ms] Jun 15 11:38:53.743: INFO: Created: latency-svc-jp45j Jun 15 11:38:53.749: INFO: Got endpoints: latency-svc-jp45j [853.031612ms] Jun 15 11:38:53.785: INFO: Created: latency-svc-bzwbg Jun 15 11:38:53.792: INFO: Got endpoints: latency-svc-bzwbg [858.884422ms] Jun 15 11:38:53.815: INFO: Created: latency-svc-wczld Jun 15 11:38:53.828: INFO: Got endpoints: latency-svc-wczld [834.192616ms] Jun 15 11:38:53.893: INFO: Created: latency-svc-kq2ql Jun 15 11:38:53.908: INFO: Got endpoints: latency-svc-kq2ql [884.233021ms] Jun 15 11:38:53.936: INFO: Created: latency-svc-ppwrw Jun 15 11:38:53.963: INFO: Got endpoints: latency-svc-ppwrw [878.545489ms] Jun 15 11:38:54.036: INFO: Created: latency-svc-j95w6 Jun 15 11:38:54.045: INFO: Got endpoints: latency-svc-j95w6 [897.947101ms] Jun 15 11:38:54.065: INFO: Created: latency-svc-m4grr Jun 15 11:38:54.082: INFO: Got endpoints: latency-svc-m4grr [889.489487ms] Jun 15 11:38:54.114: INFO: Created: latency-svc-sz8mt Jun 15 11:38:54.136: INFO: Got endpoints: latency-svc-sz8mt [836.123092ms] Jun 15 11:38:54.198: INFO: Created: latency-svc-87brq Jun 15 11:38:54.202: INFO: Got endpoints: latency-svc-87brq [882.841305ms] Jun 15 11:38:54.243: INFO: Created: latency-svc-phbkn Jun 15 11:38:54.256: INFO: Got endpoints: latency-svc-phbkn [894.913656ms] Jun 15 11:38:54.360: INFO: Created: latency-svc-t74vk Jun 15 11:38:54.363: INFO: Got endpoints: latency-svc-t74vk [913.162008ms] Jun 15 11:38:54.399: INFO: Created: latency-svc-dd2l8 Jun 15 11:38:54.413: INFO: Got endpoints: latency-svc-dd2l8 [901.783238ms] Jun 15 11:38:54.435: INFO: Created: latency-svc-9b2sw Jun 15 11:38:54.449: INFO: Got endpoints: latency-svc-9b2sw [901.445036ms] Jun 15 11:38:54.510: INFO: Created: latency-svc-vgv5n Jun 15 11:38:54.540: INFO: Got endpoints: latency-svc-vgv5n [928.806669ms] Jun 15 11:38:54.591: INFO: Created: latency-svc-8qm4l Jun 15 11:38:54.606: INFO: Got endpoints: latency-svc-8qm4l [921.110022ms] Jun 15 11:38:54.671: INFO: Created: latency-svc-rgwmm Jun 15 11:38:54.678: INFO: Got endpoints: latency-svc-rgwmm [928.357519ms] Jun 15 11:38:54.702: INFO: Created: latency-svc-hfr66 Jun 15 11:38:54.708: INFO: Got endpoints: latency-svc-hfr66 [102.181424ms] 
Jun 15 11:38:54.748: INFO: Created: latency-svc-drfw2 Jun 15 11:38:54.756: INFO: Got endpoints: latency-svc-drfw2 [964.569769ms] Jun 15 11:38:54.809: INFO: Created: latency-svc-rkhz9 Jun 15 11:38:54.823: INFO: Got endpoints: latency-svc-rkhz9 [994.127524ms] Jun 15 11:38:54.850: INFO: Created: latency-svc-f9jk8 Jun 15 11:38:54.859: INFO: Got endpoints: latency-svc-f9jk8 [950.790665ms] Jun 15 11:38:54.881: INFO: Created: latency-svc-qggrt Jun 15 11:38:54.883: INFO: Got endpoints: latency-svc-qggrt [919.725826ms] Jun 15 11:38:54.947: INFO: Created: latency-svc-fxzm4 Jun 15 11:38:54.950: INFO: Got endpoints: latency-svc-fxzm4 [904.78345ms] Jun 15 11:38:54.977: INFO: Created: latency-svc-jgcgv Jun 15 11:38:54.980: INFO: Got endpoints: latency-svc-jgcgv [897.858495ms] Jun 15 11:38:55.022: INFO: Created: latency-svc-8l6hd Jun 15 11:38:55.034: INFO: Got endpoints: latency-svc-8l6hd [898.35646ms] Jun 15 11:38:55.103: INFO: Created: latency-svc-6ptcz Jun 15 11:38:55.105: INFO: Got endpoints: latency-svc-6ptcz [903.146483ms] Jun 15 11:38:55.136: INFO: Created: latency-svc-68q7x Jun 15 11:38:55.148: INFO: Got endpoints: latency-svc-68q7x [891.865599ms] Jun 15 11:38:55.162: INFO: Created: latency-svc-9vtnl Jun 15 11:38:55.173: INFO: Got endpoints: latency-svc-9vtnl [810.737447ms] Jun 15 11:38:55.193: INFO: Created: latency-svc-dh4xq Jun 15 11:38:55.252: INFO: Got endpoints: latency-svc-dh4xq [838.733937ms] Jun 15 11:38:55.254: INFO: Created: latency-svc-z9wms Jun 15 11:38:55.258: INFO: Got endpoints: latency-svc-z9wms [808.406759ms] Jun 15 11:38:55.286: INFO: Created: latency-svc-869zk Jun 15 11:38:55.300: INFO: Got endpoints: latency-svc-869zk [760.404804ms] Jun 15 11:38:55.323: INFO: Created: latency-svc-9rxhb Jun 15 11:38:55.336: INFO: Got endpoints: latency-svc-9rxhb [658.374681ms] Jun 15 11:38:55.396: INFO: Created: latency-svc-nf6jn Jun 15 11:38:55.398: INFO: Got endpoints: latency-svc-nf6jn [690.582629ms] Jun 15 11:38:55.439: INFO: Created: latency-svc-d9dsz Jun 15 11:38:55.457: INFO: Got endpoints: latency-svc-d9dsz [700.664756ms] Jun 15 11:38:55.570: INFO: Created: latency-svc-dh8js Jun 15 11:38:55.574: INFO: Got endpoints: latency-svc-dh8js [751.293577ms] Jun 15 11:38:55.610: INFO: Created: latency-svc-6x247 Jun 15 11:38:55.662: INFO: Got endpoints: latency-svc-6x247 [803.418661ms] Jun 15 11:38:55.755: INFO: Created: latency-svc-vblff Jun 15 11:38:55.794: INFO: Got endpoints: latency-svc-vblff [911.3875ms] Jun 15 11:38:55.833: INFO: Created: latency-svc-xpg2b Jun 15 11:38:55.848: INFO: Got endpoints: latency-svc-xpg2b [897.662682ms] Jun 15 11:38:55.923: INFO: Created: latency-svc-5hm8q Jun 15 11:38:55.925: INFO: Got endpoints: latency-svc-5hm8q [945.289594ms] Jun 15 11:38:55.981: INFO: Created: latency-svc-kjccs Jun 15 11:38:55.992: INFO: Got endpoints: latency-svc-kjccs [958.34169ms] Jun 15 11:38:56.022: INFO: Created: latency-svc-5h5g6 Jun 15 11:38:56.066: INFO: Got endpoints: latency-svc-5h5g6 [961.015191ms] Jun 15 11:38:56.079: INFO: Created: latency-svc-8f7p9 Jun 15 11:38:56.095: INFO: Got endpoints: latency-svc-8f7p9 [946.834652ms] Jun 15 11:38:56.122: INFO: Created: latency-svc-mn2lb Jun 15 11:38:56.150: INFO: Got endpoints: latency-svc-mn2lb [976.776594ms] Jun 15 11:38:56.240: INFO: Created: latency-svc-pm98z Jun 15 11:38:56.246: INFO: Got endpoints: latency-svc-pm98z [993.627647ms] Jun 15 11:38:56.277: INFO: Created: latency-svc-nt7x8 Jun 15 11:38:56.319: INFO: Got endpoints: latency-svc-nt7x8 [1.061125938s] Jun 15 11:38:56.396: INFO: Created: latency-svc-dmrqd Jun 15 11:38:56.414: 
INFO: Got endpoints: latency-svc-dmrqd [1.114230129s] Jun 15 11:38:56.449: INFO: Created: latency-svc-5dnsc Jun 15 11:38:56.462: INFO: Got endpoints: latency-svc-5dnsc [1.125934364s] Jun 15 11:38:56.546: INFO: Created: latency-svc-rglpp Jun 15 11:38:56.552: INFO: Got endpoints: latency-svc-rglpp [1.153576893s] Jun 15 11:38:56.571: INFO: Created: latency-svc-6zfp9 Jun 15 11:38:56.584: INFO: Got endpoints: latency-svc-6zfp9 [1.126425327s] Jun 15 11:38:56.628: INFO: Created: latency-svc-mcd77 Jun 15 11:38:56.643: INFO: Got endpoints: latency-svc-mcd77 [1.068781162s] Jun 15 11:38:56.695: INFO: Created: latency-svc-sxqfg Jun 15 11:38:56.703: INFO: Got endpoints: latency-svc-sxqfg [1.040813435s] Jun 15 11:38:56.733: INFO: Created: latency-svc-k5pxf Jun 15 11:38:56.745: INFO: Got endpoints: latency-svc-k5pxf [951.16581ms] Jun 15 11:38:56.781: INFO: Created: latency-svc-9d948 Jun 15 11:38:56.841: INFO: Got endpoints: latency-svc-9d948 [992.544094ms] Jun 15 11:38:56.871: INFO: Created: latency-svc-6zkt8 Jun 15 11:38:56.900: INFO: Got endpoints: latency-svc-6zkt8 [974.456228ms] Jun 15 11:38:56.922: INFO: Created: latency-svc-nkfmx Jun 15 11:38:56.933: INFO: Got endpoints: latency-svc-nkfmx [939.953653ms] Jun 15 11:38:56.983: INFO: Created: latency-svc-nsn98 Jun 15 11:38:56.987: INFO: Got endpoints: latency-svc-nsn98 [920.531538ms] Jun 15 11:38:57.015: INFO: Created: latency-svc-7hdg8 Jun 15 11:38:57.029: INFO: Got endpoints: latency-svc-7hdg8 [933.98709ms] Jun 15 11:38:57.063: INFO: Created: latency-svc-7zc5n Jun 15 11:38:57.078: INFO: Got endpoints: latency-svc-7zc5n [927.51526ms] Jun 15 11:38:57.138: INFO: Created: latency-svc-d9bkq Jun 15 11:38:57.166: INFO: Got endpoints: latency-svc-d9bkq [920.260992ms] Jun 15 11:38:57.211: INFO: Created: latency-svc-qgtgx Jun 15 11:38:57.264: INFO: Got endpoints: latency-svc-qgtgx [944.753811ms] Jun 15 11:38:57.285: INFO: Created: latency-svc-45gq8 Jun 15 11:38:57.315: INFO: Got endpoints: latency-svc-45gq8 [900.047128ms] Jun 15 11:38:57.342: INFO: Created: latency-svc-5m6nc Jun 15 11:38:57.360: INFO: Got endpoints: latency-svc-5m6nc [898.159253ms] Jun 15 11:38:57.408: INFO: Created: latency-svc-6qwvs Jun 15 11:38:57.421: INFO: Got endpoints: latency-svc-6qwvs [868.62843ms] Jun 15 11:38:57.444: INFO: Created: latency-svc-sbhw7 Jun 15 11:38:57.457: INFO: Got endpoints: latency-svc-sbhw7 [873.273963ms] Jun 15 11:38:57.477: INFO: Created: latency-svc-r96rz Jun 15 11:38:57.500: INFO: Got endpoints: latency-svc-r96rz [856.626885ms] Jun 15 11:38:57.551: INFO: Created: latency-svc-mh6tr Jun 15 11:38:57.560: INFO: Got endpoints: latency-svc-mh6tr [856.775472ms] Jun 15 11:38:57.594: INFO: Created: latency-svc-769xv Jun 15 11:38:57.608: INFO: Got endpoints: latency-svc-769xv [862.518585ms] Jun 15 11:38:57.630: INFO: Created: latency-svc-bdntz Jun 15 11:38:57.650: INFO: Got endpoints: latency-svc-bdntz [809.578413ms] Jun 15 11:38:57.725: INFO: Created: latency-svc-cgqvf Jun 15 11:38:57.740: INFO: Got endpoints: latency-svc-cgqvf [840.60643ms] Jun 15 11:38:57.771: INFO: Created: latency-svc-4dh72 Jun 15 11:38:57.783: INFO: Got endpoints: latency-svc-4dh72 [850.145779ms] Jun 15 11:38:57.804: INFO: Created: latency-svc-2zbc2 Jun 15 11:38:57.862: INFO: Got endpoints: latency-svc-2zbc2 [875.85724ms] Jun 15 11:38:57.879: INFO: Created: latency-svc-2fpsk Jun 15 11:38:57.909: INFO: Got endpoints: latency-svc-2fpsk [880.217654ms] Jun 15 11:38:57.944: INFO: Created: latency-svc-jtsnh Jun 15 11:38:57.957: INFO: Got endpoints: latency-svc-jtsnh [879.652507ms] Jun 15 11:38:58.025: 
INFO: Created: latency-svc-rw7d6 Jun 15 11:38:58.032: INFO: Got endpoints: latency-svc-rw7d6 [865.882422ms] Jun 15 11:38:58.062: INFO: Created: latency-svc-vknlz Jun 15 11:38:58.082: INFO: Got endpoints: latency-svc-vknlz [818.532797ms] Jun 15 11:38:58.113: INFO: Created: latency-svc-4rcg6 Jun 15 11:38:58.162: INFO: Got endpoints: latency-svc-4rcg6 [847.058115ms] Jun 15 11:38:58.182: INFO: Created: latency-svc-7cmc7 Jun 15 11:38:58.193: INFO: Got endpoints: latency-svc-7cmc7 [832.424817ms] Jun 15 11:38:58.224: INFO: Created: latency-svc-kwcfc Jun 15 11:38:58.247: INFO: Got endpoints: latency-svc-kwcfc [826.197613ms] Jun 15 11:38:58.372: INFO: Created: latency-svc-ww5m2 Jun 15 11:38:58.391: INFO: Got endpoints: latency-svc-ww5m2 [934.119064ms] Jun 15 11:38:58.428: INFO: Created: latency-svc-bdvfn Jun 15 11:38:58.439: INFO: Got endpoints: latency-svc-bdvfn [939.844906ms] Jun 15 11:38:58.466: INFO: Created: latency-svc-b9lzt Jun 15 11:38:58.552: INFO: Got endpoints: latency-svc-b9lzt [991.895029ms] Jun 15 11:38:58.553: INFO: Created: latency-svc-gtscv Jun 15 11:38:58.586: INFO: Created: latency-svc-wlg8b Jun 15 11:38:58.619: INFO: Got endpoints: latency-svc-gtscv [1.011314565s] Jun 15 11:38:58.620: INFO: Created: latency-svc-vjgpx Jun 15 11:38:58.632: INFO: Got endpoints: latency-svc-vjgpx [891.949131ms] Jun 15 11:38:58.731: INFO: Got endpoints: latency-svc-wlg8b [1.080660601s] Jun 15 11:38:58.731: INFO: Created: latency-svc-6w75w Jun 15 11:38:58.734: INFO: Got endpoints: latency-svc-6w75w [950.883002ms] Jun 15 11:38:58.803: INFO: Created: latency-svc-n6vf6 Jun 15 11:38:58.825: INFO: Got endpoints: latency-svc-n6vf6 [962.799404ms] Jun 15 11:38:58.887: INFO: Created: latency-svc-9jw5w Jun 15 11:38:58.892: INFO: Got endpoints: latency-svc-9jw5w [982.055846ms] Jun 15 11:38:58.920: INFO: Created: latency-svc-mv6hm Jun 15 11:38:58.934: INFO: Got endpoints: latency-svc-mv6hm [976.125295ms] Jun 15 11:38:58.980: INFO: Created: latency-svc-qv5lm Jun 15 11:38:59.043: INFO: Got endpoints: latency-svc-qv5lm [1.010754536s] Jun 15 11:38:59.044: INFO: Created: latency-svc-b9mgp Jun 15 11:38:59.048: INFO: Got endpoints: latency-svc-b9mgp [965.845347ms] Jun 15 11:38:59.079: INFO: Created: latency-svc-5dbwf Jun 15 11:38:59.097: INFO: Got endpoints: latency-svc-5dbwf [935.268968ms] Jun 15 11:38:59.118: INFO: Created: latency-svc-sptvp Jun 15 11:38:59.133: INFO: Got endpoints: latency-svc-sptvp [940.23562ms] Jun 15 11:38:59.208: INFO: Created: latency-svc-nrl8p Jun 15 11:38:59.208: INFO: Got endpoints: latency-svc-nrl8p [960.810033ms] Jun 15 11:38:59.237: INFO: Created: latency-svc-tq6cd Jun 15 11:38:59.254: INFO: Got endpoints: latency-svc-tq6cd [862.40279ms] Jun 15 11:38:59.276: INFO: Created: latency-svc-q6nrc Jun 15 11:38:59.296: INFO: Got endpoints: latency-svc-q6nrc [856.31097ms] Jun 15 11:38:59.378: INFO: Created: latency-svc-dxplj Jun 15 11:38:59.381: INFO: Got endpoints: latency-svc-dxplj [828.448964ms] Jun 15 11:38:59.534: INFO: Created: latency-svc-8jg5w Jun 15 11:38:59.537: INFO: Got endpoints: latency-svc-8jg5w [917.069253ms] Jun 15 11:38:59.587: INFO: Created: latency-svc-tgpxc Jun 15 11:38:59.596: INFO: Got endpoints: latency-svc-tgpxc [963.729874ms] Jun 15 11:38:59.618: INFO: Created: latency-svc-5rmsp Jun 15 11:38:59.621: INFO: Got endpoints: latency-svc-5rmsp [889.533784ms] Jun 15 11:38:59.710: INFO: Created: latency-svc-tzvf8 Jun 15 11:38:59.712: INFO: Got endpoints: latency-svc-tzvf8 [978.757165ms] Jun 15 11:38:59.748: INFO: Created: latency-svc-995rn Jun 15 11:38:59.775: INFO: Got endpoints: 
latency-svc-995rn [949.860716ms] Jun 15 11:38:59.827: INFO: Created: latency-svc-kktxp Jun 15 11:38:59.859: INFO: Got endpoints: latency-svc-kktxp [967.834ms] Jun 15 11:38:59.908: INFO: Created: latency-svc-sh8mj Jun 15 11:39:00.006: INFO: Got endpoints: latency-svc-sh8mj [1.072669145s] Jun 15 11:39:00.018: INFO: Created: latency-svc-r6786 Jun 15 11:39:00.030: INFO: Got endpoints: latency-svc-r6786 [987.484256ms] Jun 15 11:39:00.060: INFO: Created: latency-svc-pjz5p Jun 15 11:39:00.072: INFO: Got endpoints: latency-svc-pjz5p [1.023921289s] Jun 15 11:39:00.100: INFO: Created: latency-svc-mdbpq Jun 15 11:39:00.162: INFO: Got endpoints: latency-svc-mdbpq [1.065120574s] Jun 15 11:39:00.165: INFO: Created: latency-svc-8kdvp Jun 15 11:39:00.169: INFO: Got endpoints: latency-svc-8kdvp [1.036019688s] Jun 15 11:39:00.190: INFO: Created: latency-svc-mgdj7 Jun 15 11:39:00.199: INFO: Got endpoints: latency-svc-mgdj7 [991.086438ms] Jun 15 11:39:00.220: INFO: Created: latency-svc-njnsd Jun 15 11:39:00.230: INFO: Got endpoints: latency-svc-njnsd [975.912225ms] Jun 15 11:39:00.253: INFO: Created: latency-svc-fmx7x Jun 15 11:39:00.299: INFO: Got endpoints: latency-svc-fmx7x [1.003535588s] Jun 15 11:39:00.313: INFO: Created: latency-svc-hqxs9 Jun 15 11:39:00.327: INFO: Got endpoints: latency-svc-hqxs9 [946.387101ms] Jun 15 11:39:00.347: INFO: Created: latency-svc-mmk9n Jun 15 11:39:00.363: INFO: Got endpoints: latency-svc-mmk9n [825.926413ms] Jun 15 11:39:00.397: INFO: Created: latency-svc-7l7fc Jun 15 11:39:00.456: INFO: Got endpoints: latency-svc-7l7fc [859.741524ms] Jun 15 11:39:00.484: INFO: Created: latency-svc-7lmkz Jun 15 11:39:00.523: INFO: Got endpoints: latency-svc-7lmkz [902.216724ms] Jun 15 11:39:00.605: INFO: Created: latency-svc-sd9xt Jun 15 11:39:00.609: INFO: Got endpoints: latency-svc-sd9xt [896.99757ms] Jun 15 11:39:00.650: INFO: Created: latency-svc-ht9w7 Jun 15 11:39:00.658: INFO: Got endpoints: latency-svc-ht9w7 [882.307487ms] Jun 15 11:39:00.683: INFO: Created: latency-svc-5q6tt Jun 15 11:39:00.688: INFO: Got endpoints: latency-svc-5q6tt [828.158258ms] Jun 15 11:39:00.755: INFO: Created: latency-svc-zzh26 Jun 15 11:39:00.766: INFO: Got endpoints: latency-svc-zzh26 [759.659362ms] Jun 15 11:39:00.788: INFO: Created: latency-svc-jntlg Jun 15 11:39:00.803: INFO: Got endpoints: latency-svc-jntlg [772.114775ms] Jun 15 11:39:00.837: INFO: Created: latency-svc-lgcj9 Jun 15 11:39:00.892: INFO: Got endpoints: latency-svc-lgcj9 [820.204075ms] Jun 15 11:39:00.912: INFO: Created: latency-svc-kg5f8 Jun 15 11:39:00.929: INFO: Got endpoints: latency-svc-kg5f8 [767.168953ms] Jun 15 11:39:00.967: INFO: Created: latency-svc-7jr5d Jun 15 11:39:00.978: INFO: Got endpoints: latency-svc-7jr5d [808.294177ms] Jun 15 11:39:01.074: INFO: Created: latency-svc-5tf2w Jun 15 11:39:01.076: INFO: Got endpoints: latency-svc-5tf2w [876.903074ms] Jun 15 11:39:01.119: INFO: Created: latency-svc-lf9sh Jun 15 11:39:01.134: INFO: Got endpoints: latency-svc-lf9sh [904.620065ms] Jun 15 11:39:01.153: INFO: Created: latency-svc-wlq2c Jun 15 11:39:01.164: INFO: Got endpoints: latency-svc-wlq2c [864.975792ms] Jun 15 11:39:01.240: INFO: Created: latency-svc-9pp77 Jun 15 11:39:01.243: INFO: Got endpoints: latency-svc-9pp77 [915.821872ms] Jun 15 11:39:01.287: INFO: Created: latency-svc-tnccd Jun 15 11:39:01.317: INFO: Got endpoints: latency-svc-tnccd [954.32108ms] Jun 15 11:39:01.390: INFO: Created: latency-svc-k46xf Jun 15 11:39:01.398: INFO: Got endpoints: latency-svc-k46xf [941.718892ms] Jun 15 11:39:01.432: INFO: Created: 
latency-svc-829ns Jun 15 11:39:01.452: INFO: Got endpoints: latency-svc-829ns [929.458407ms] Jun 15 11:39:01.551: INFO: Created: latency-svc-5svgm Jun 15 11:39:01.578: INFO: Got endpoints: latency-svc-5svgm [968.453334ms] Jun 15 11:39:01.579: INFO: Created: latency-svc-6xbqw Jun 15 11:39:01.592: INFO: Got endpoints: latency-svc-6xbqw [934.512892ms] Jun 15 11:39:01.614: INFO: Created: latency-svc-xgz67 Jun 15 11:39:01.622: INFO: Got endpoints: latency-svc-xgz67 [934.589016ms] Jun 15 11:39:01.719: INFO: Created: latency-svc-8nlnk Jun 15 11:39:01.740: INFO: Got endpoints: latency-svc-8nlnk [973.789964ms] Jun 15 11:39:01.791: INFO: Created: latency-svc-tnswt Jun 15 11:39:01.917: INFO: Got endpoints: latency-svc-tnswt [1.113970052s] Jun 15 11:39:01.932: INFO: Created: latency-svc-4tkh9 Jun 15 11:39:01.977: INFO: Got endpoints: latency-svc-4tkh9 [1.084612599s] Jun 15 11:39:02.071: INFO: Created: latency-svc-czmsf Jun 15 11:39:02.085: INFO: Got endpoints: latency-svc-czmsf [1.155927372s] Jun 15 11:39:02.086: INFO: Latencies: [85.63976ms 102.181424ms 139.943605ms 176.136676ms 244.068694ms 305.574549ms 362.85567ms 425.851249ms 485.824944ms 512.228771ms 542.668907ms 572.73497ms 631.919662ms 658.374681ms 690.582629ms 696.62823ms 700.664756ms 738.983796ms 748.133619ms 751.293577ms 754.469061ms 757.837106ms 759.659362ms 760.404804ms 760.50379ms 763.721344ms 767.168953ms 772.114775ms 778.98222ms 781.061024ms 786.942733ms 789.640054ms 795.674533ms 803.418661ms 804.735399ms 807.896685ms 808.294177ms 808.406759ms 809.578413ms 810.737447ms 812.791454ms 817.804683ms 818.532797ms 820.204075ms 825.926413ms 826.197613ms 828.158258ms 828.448964ms 828.983699ms 829.95277ms 830.010235ms 831.750257ms 832.424817ms 834.192616ms 836.123092ms 838.733937ms 840.60643ms 846.961638ms 847.058115ms 847.597142ms 847.883875ms 850.145779ms 853.031612ms 854.322152ms 856.31097ms 856.626885ms 856.775472ms 858.884422ms 859.741524ms 862.40279ms 862.518585ms 864.975792ms 865.882422ms 868.62843ms 870.130591ms 873.273963ms 874.922735ms 875.85724ms 876.903074ms 877.769097ms 878.545489ms 879.652507ms 880.217654ms 882.307487ms 882.841305ms 884.233021ms 886.300871ms 889.489487ms 889.533784ms 891.865599ms 891.949131ms 894.913656ms 895.812033ms 895.861043ms 896.99757ms 897.662682ms 897.858495ms 897.947101ms 898.159253ms 898.35646ms 900.047128ms 901.445036ms 901.783238ms 902.216724ms 903.146483ms 904.620065ms 904.78345ms 911.3875ms 913.162008ms 915.821872ms 917.069253ms 919.725826ms 920.260992ms 920.531538ms 921.110022ms 927.51526ms 928.357519ms 928.806669ms 929.458407ms 931.032913ms 933.98709ms 934.119064ms 934.512892ms 934.589016ms 935.268968ms 939.844906ms 939.953653ms 940.23562ms 941.718892ms 944.753811ms 945.289594ms 946.387101ms 946.834652ms 949.860716ms 950.790665ms 950.883002ms 951.16581ms 954.32108ms 958.34169ms 960.810033ms 960.96945ms 961.015191ms 962.799404ms 963.153066ms 963.729874ms 964.569769ms 965.845347ms 967.834ms 968.453334ms 973.789964ms 974.456228ms 975.912225ms 976.125295ms 976.776594ms 978.757165ms 982.055846ms 987.484256ms 991.086438ms 991.895029ms 992.544094ms 993.627647ms 994.127524ms 997.686569ms 1.003535588s 1.010754536s 1.011314565s 1.023921289s 1.036019688s 1.040813435s 1.061125938s 1.065120574s 1.068781162s 1.072669145s 1.080660601s 1.084612599s 1.113970052s 1.114230129s 1.125934364s 1.126425327s 1.153576893s 1.155927372s 1.299596107s 1.385592534s 1.411587944s 1.441889567s 1.805438137s 1.814504859s 1.858480687s 1.882042269s 2.274294302s 2.298634526s 2.310817548s 2.356476932s 2.410187422s 2.415302332s 2.428341811s 
2.445176188s 2.450694828s 2.451340038s 2.476203042s] Jun 15 11:39:02.086: INFO: 50 %ile: 900.047128ms Jun 15 11:39:02.086: INFO: 90 %ile: 1.155927372s Jun 15 11:39:02.086: INFO: 99 %ile: 2.451340038s Jun 15 11:39:02.086: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:39:02.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-dhb78" for this suite. Jun 15 11:41:36.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:41:36.141: INFO: namespace: e2e-tests-svc-latency-dhb78, resource: bindings, ignored listing per whitelist Jun 15 11:41:36.178: INFO: namespace e2e-tests-svc-latency-dhb78 deletion completed in 2m34.086293977s • [SLOW TEST:172.018 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:41:36.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Jun 15 11:41:36.247: INFO: Waiting up to 5m0s for pod "client-containers-2b2ae8e8-aefd-11ea-99db-0242ac11001b" in namespace "e2e-tests-containers-k7wtl" to be "success or failure" Jun 15 11:41:36.266: INFO: Pod "client-containers-2b2ae8e8-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.206175ms Jun 15 11:41:38.271: INFO: Pod "client-containers-2b2ae8e8-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023526753s Jun 15 11:41:40.338: INFO: Pod "client-containers-2b2ae8e8-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091416724s Jun 15 11:41:42.342: INFO: Pod "client-containers-2b2ae8e8-aefd-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.094825795s STEP: Saw pod success Jun 15 11:41:42.342: INFO: Pod "client-containers-2b2ae8e8-aefd-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:41:42.344: INFO: Trying to get logs from node hunter-worker pod client-containers-2b2ae8e8-aefd-11ea-99db-0242ac11001b container test-container: STEP: delete the pod Jun 15 11:41:42.361: INFO: Waiting for pod client-containers-2b2ae8e8-aefd-11ea-99db-0242ac11001b to disappear Jun 15 11:41:42.366: INFO: Pod client-containers-2b2ae8e8-aefd-11ea-99db-0242ac11001b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:41:42.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-k7wtl" for this suite. Jun 15 11:41:48.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:41:48.397: INFO: namespace: e2e-tests-containers-k7wtl, resource: bindings, ignored listing per whitelist Jun 15 11:41:48.437: INFO: namespace e2e-tests-containers-k7wtl deletion completed in 6.069213026s • [SLOW TEST:12.259 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:41:48.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jun 15 11:41:48.521: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 15 11:41:48.531: INFO: Waiting for terminating namespaces to be deleted... 
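Referring back to the [k8s.io] Docker Containers test that finished just above: when a pod spec sets neither command nor args, the kubelet runs the image's own ENTRYPOINT and CMD. A minimal sketch of such a pod (the name and image here are assumptions; the suite uses its own entrypoint-tester image):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-defaults    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29               # assumption
    # no command: and no args: fields, so the image defaults are used as-is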
Jun 15 11:41:48.532: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Jun 15 11:41:48.537: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Jun 15 11:41:48.537: INFO: Container kube-proxy ready: true, restart count 0 Jun 15 11:41:48.537: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 15 11:41:48.537: INFO: Container kindnet-cni ready: true, restart count 0 Jun 15 11:41:48.537: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 15 11:41:48.537: INFO: Container coredns ready: true, restart count 0 Jun 15 11:41:48.537: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Jun 15 11:41:48.542: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 15 11:41:48.542: INFO: Container kindnet-cni ready: true, restart count 0 Jun 15 11:41:48.542: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 15 11:41:48.542: INFO: Container coredns ready: true, restart count 0 Jun 15 11:41:48.542: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 15 11:41:48.542: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 Jun 15 11:41:48.617: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker Jun 15 11:41:48.617: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 Jun 15 11:41:48.617: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker Jun 15 11:41:48.617: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 Jun 15 11:41:48.617: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 Jun 15 11:41:48.617: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-328b5c3f-aefd-11ea-99db-0242ac11001b.1618b4d8218b785d], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-96x9g/filler-pod-328b5c3f-aefd-11ea-99db-0242ac11001b to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-328b5c3f-aefd-11ea-99db-0242ac11001b.1618b4d89383994b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-328b5c3f-aefd-11ea-99db-0242ac11001b.1618b4db323be1a4], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-328b5c3f-aefd-11ea-99db-0242ac11001b.1618b4db40d1db8c], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-328bd175-aefd-11ea-99db-0242ac11001b.1618b4d821bca012], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-96x9g/filler-pod-328bd175-aefd-11ea-99db-0242ac11001b to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-328bd175-aefd-11ea-99db-0242ac11001b.1618b4d86e76d08e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-328bd175-aefd-11ea-99db-0242ac11001b.1618b4db2b078e30], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-328bd175-aefd-11ea-99db-0242ac11001b.1618b4db396c562e], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.1618b4dbdc52dfff], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:42:05.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-96x9g" for this suite. 
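The FailedScheduling event above is the assertion at the heart of this spec: once the filler pods have consumed most of the allocatable CPU on both workers, one more pod asking for CPU that no node can supply must stay Pending with an "Insufficient cpu" message. A minimal way to provoke the same condition by hand is sketched below; the pod name and the deliberately oversized 1000-CPU request are illustrative assumptions, not values from the test, which sizes its requests from each node's allocatable CPU.

# Sketch only (made-up names): request far more CPU than any node can allocate
# and watch the scheduler record a FailedScheduling / Insufficient cpu event.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cpu-hog-demo
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1000"
EOF
kubectl describe pod cpu-hog-demo   # Events should show FailedScheduling
kubectl delete pod cpu-hog-demo     # clean up; the pod never actually runs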
Jun 15 11:42:13.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:42:13.909: INFO: namespace: e2e-tests-sched-pred-96x9g, resource: bindings, ignored listing per whitelist Jun 15 11:42:13.948: INFO: namespace e2e-tests-sched-pred-96x9g deletion completed in 8.077537484s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:25.510 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:42:13.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-41ecce64-aefd-11ea-99db-0242ac11001b STEP: Creating a pod to test consume secrets Jun 15 11:42:14.495: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-41ef7f7e-aefd-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-kl55t" to be "success or failure" Jun 15 11:42:15.118: INFO: Pod "pod-projected-secrets-41ef7f7e-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 622.48219ms Jun 15 11:42:18.403: INFO: Pod "pod-projected-secrets-41ef7f7e-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.907477411s Jun 15 11:42:21.291: INFO: Pod "pod-projected-secrets-41ef7f7e-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.79587419s Jun 15 11:42:23.663: INFO: Pod "pod-projected-secrets-41ef7f7e-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.167734314s Jun 15 11:42:26.016: INFO: Pod "pod-projected-secrets-41ef7f7e-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.520707497s Jun 15 11:42:28.043: INFO: Pod "pod-projected-secrets-41ef7f7e-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.54804765s Jun 15 11:42:30.047: INFO: Pod "pod-projected-secrets-41ef7f7e-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.551366384s Jun 15 11:42:32.124: INFO: Pod "pod-projected-secrets-41ef7f7e-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.629302804s Jun 15 11:42:34.423: INFO: Pod "pod-projected-secrets-41ef7f7e-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.927730454s Jun 15 11:42:36.426: INFO: Pod "pod-projected-secrets-41ef7f7e-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.930603143s Jun 15 11:42:38.458: INFO: Pod "pod-projected-secrets-41ef7f7e-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.963194465s Jun 15 11:42:40.462: INFO: Pod "pod-projected-secrets-41ef7f7e-aefd-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.966866567s STEP: Saw pod success Jun 15 11:42:40.462: INFO: Pod "pod-projected-secrets-41ef7f7e-aefd-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:42:40.465: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-41ef7f7e-aefd-11ea-99db-0242ac11001b container projected-secret-volume-test: STEP: delete the pod Jun 15 11:42:40.536: INFO: Waiting for pod pod-projected-secrets-41ef7f7e-aefd-11ea-99db-0242ac11001b to disappear Jun 15 11:42:40.541: INFO: Pod pod-projected-secrets-41ef7f7e-aefd-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:42:40.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kl55t" for this suite. Jun 15 11:42:46.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:42:46.564: INFO: namespace: e2e-tests-projected-kl55t, resource: bindings, ignored listing per whitelist Jun 15 11:42:46.640: INFO: namespace e2e-tests-projected-kl55t deletion completed in 6.09579088s • [SLOW TEST:32.692 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:42:46.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 15 11:42:46.808: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:42:50.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-x5dkr" for this suite. 
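The websocket spec above only needs its pod to come up, because what it really exercises is the pod's exec subresource, dialled over a WebSocket connection to the API server. The everyday kubectl equivalent of that remote command execution is sketched below; the pod name, busybox image and sleep command are illustrative assumptions, and kubectl negotiates the streaming protocol itself rather than performing the raw WebSocket dial the test does.

# Illustration only (hypothetical names): run a throwaway pod and execute a
# command in it remotely, the capability this spec verifies end to end.
kubectl run ws-exec-demo --restart=Never --image=docker.io/library/busybox:1.29 -- sleep 3600
kubectl wait --for=condition=Ready pod/ws-exec-demo --timeout=120s
kubectl exec ws-exec-demo -- cat /etc/resolv.conf
kubectl delete pod ws-exec-demo
# Behind 'kubectl exec' the request goes to the pod's exec subresource, roughly:
#   /api/v1/namespaces/default/pods/ws-exec-demo/exec?command=cat&command=/etc/resolv.conf&stdout=true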
Jun 15 11:43:42.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:43:43.005: INFO: namespace: e2e-tests-pods-x5dkr, resource: bindings, ignored listing per whitelist Jun 15 11:43:43.011: INFO: namespace e2e-tests-pods-x5dkr deletion completed in 52.096036011s • [SLOW TEST:56.371 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:43:43.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 15 11:43:43.100: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76c74eb7-aefd-11ea-99db-0242ac11001b" in namespace "e2e-tests-downward-api-j8nhs" to be "success or failure" Jun 15 11:43:43.126: INFO: Pod "downwardapi-volume-76c74eb7-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.64229ms Jun 15 11:43:45.130: INFO: Pod "downwardapi-volume-76c74eb7-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030182509s Jun 15 11:43:47.482: INFO: Pod "downwardapi-volume-76c74eb7-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.382460494s Jun 15 11:43:49.485: INFO: Pod "downwardapi-volume-76c74eb7-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.385257739s Jun 15 11:43:51.489: INFO: Pod "downwardapi-volume-76c74eb7-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.38930328s Jun 15 11:43:53.586: INFO: Pod "downwardapi-volume-76c74eb7-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.486242449s Jun 15 11:43:55.820: INFO: Pod "downwardapi-volume-76c74eb7-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.719957521s Jun 15 11:43:57.823: INFO: Pod "downwardapi-volume-76c74eb7-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.723132948s Jun 15 11:43:59.827: INFO: Pod "downwardapi-volume-76c74eb7-aefd-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.726773907s STEP: Saw pod success Jun 15 11:43:59.827: INFO: Pod "downwardapi-volume-76c74eb7-aefd-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:43:59.830: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-76c74eb7-aefd-11ea-99db-0242ac11001b container client-container: STEP: delete the pod Jun 15 11:44:00.454: INFO: Waiting for pod downwardapi-volume-76c74eb7-aefd-11ea-99db-0242ac11001b to disappear Jun 15 11:44:00.477: INFO: Pod downwardapi-volume-76c74eb7-aefd-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:44:00.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-j8nhs" for this suite. Jun 15 11:44:06.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:44:06.541: INFO: namespace: e2e-tests-downward-api-j8nhs, resource: bindings, ignored listing per whitelist Jun 15 11:44:06.568: INFO: namespace e2e-tests-downward-api-j8nhs deletion completed in 6.0875265s • [SLOW TEST:23.557 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:44:06.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-wjx5 STEP: Creating a pod to test atomic-volume-subpath Jun 15 11:44:06.682: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-wjx5" in namespace "e2e-tests-subpath-wm6mv" to be "success or failure" Jun 15 11:44:06.687: INFO: Pod "pod-subpath-test-projected-wjx5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.116857ms Jun 15 11:44:08.691: INFO: Pod "pod-subpath-test-projected-wjx5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008883s Jun 15 11:44:10.712: INFO: Pod "pod-subpath-test-projected-wjx5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029439896s Jun 15 11:44:12.715: INFO: Pod "pod-subpath-test-projected-wjx5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032908904s Jun 15 11:44:14.719: INFO: Pod "pod-subpath-test-projected-wjx5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036377497s Jun 15 11:44:16.722: INFO: Pod "pod-subpath-test-projected-wjx5": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.04000043s Jun 15 11:44:18.725: INFO: Pod "pod-subpath-test-projected-wjx5": Phase="Running", Reason="", readiness=false. Elapsed: 12.043058007s Jun 15 11:44:20.729: INFO: Pod "pod-subpath-test-projected-wjx5": Phase="Running", Reason="", readiness=false. Elapsed: 14.046377754s Jun 15 11:44:22.732: INFO: Pod "pod-subpath-test-projected-wjx5": Phase="Running", Reason="", readiness=false. Elapsed: 16.049572567s Jun 15 11:44:24.735: INFO: Pod "pod-subpath-test-projected-wjx5": Phase="Running", Reason="", readiness=false. Elapsed: 18.052692115s Jun 15 11:44:26.739: INFO: Pod "pod-subpath-test-projected-wjx5": Phase="Running", Reason="", readiness=false. Elapsed: 20.056468403s Jun 15 11:44:28.742: INFO: Pod "pod-subpath-test-projected-wjx5": Phase="Running", Reason="", readiness=false. Elapsed: 22.06010017s Jun 15 11:44:30.746: INFO: Pod "pod-subpath-test-projected-wjx5": Phase="Running", Reason="", readiness=false. Elapsed: 24.063813669s Jun 15 11:44:32.749: INFO: Pod "pod-subpath-test-projected-wjx5": Phase="Running", Reason="", readiness=false. Elapsed: 26.067304823s Jun 15 11:44:34.754: INFO: Pod "pod-subpath-test-projected-wjx5": Phase="Running", Reason="", readiness=false. Elapsed: 28.071874807s Jun 15 11:44:36.808: INFO: Pod "pod-subpath-test-projected-wjx5": Phase="Running", Reason="", readiness=false. Elapsed: 30.125990528s Jun 15 11:44:38.820: INFO: Pod "pod-subpath-test-projected-wjx5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.138083204s STEP: Saw pod success Jun 15 11:44:38.820: INFO: Pod "pod-subpath-test-projected-wjx5" satisfied condition "success or failure" Jun 15 11:44:38.823: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-wjx5 container test-container-subpath-projected-wjx5: STEP: delete the pod Jun 15 11:44:38.862: INFO: Waiting for pod pod-subpath-test-projected-wjx5 to disappear Jun 15 11:44:38.867: INFO: Pod pod-subpath-test-projected-wjx5 no longer exists STEP: Deleting pod pod-subpath-test-projected-wjx5 Jun 15 11:44:38.867: INFO: Deleting pod "pod-subpath-test-projected-wjx5" in namespace "e2e-tests-subpath-wm6mv" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:44:38.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-wm6mv" for this suite. 
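For reference, the shape of a pod that mounts a single key of a projected (atomic-writer) volume via subPath, which is the mechanism the spec above exercises, is sketched below. The ConfigMap, key and pod names are illustrative assumptions; the real test wires up its own projected sources and assertions.

# Sketch only, with made-up names: expose one key of a projected volume through subPath.
kubectl create configmap subpath-demo-config --from-literal=greeting=hello
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: subpath-demo-config
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["cat", "/data/greeting"]
    volumeMounts:
    - name: projected-vol
      mountPath: /data/greeting
      subPath: greeting
EOF
kubectl logs subpath-demo   # prints "hello" once the pod has completed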
Jun 15 11:44:44.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:44:44.970: INFO: namespace: e2e-tests-subpath-wm6mv, resource: bindings, ignored listing per whitelist Jun 15 11:44:44.985: INFO: namespace e2e-tests-subpath-wm6mv deletion completed in 6.113802041s • [SLOW TEST:38.418 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:44:44.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 15 11:44:45.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-hqlcc' Jun 15 11:44:45.445: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 15 11:44:45.445: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Jun 15 11:44:45.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-hqlcc' Jun 15 11:44:45.563: INFO: stderr: "" Jun 15 11:44:45.563: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:44:45.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hqlcc" for this suite. 
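The stderr captured above already warns that the job/v1 generator is deprecated; the generator-free way to get an equivalent Job object is to create it from a manifest. A minimal sketch follows, reusing the image from the test; the Job name is arbitrary and the spec is only an approximation of what the generator produced.

# Sketch of a manifest-based equivalent of 'kubectl run --restart=OnFailure --generator=job/v1'.
cat <<EOF | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
EOF
kubectl get job e2e-test-nginx-job
kubectl delete job e2e-test-nginx-job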
Jun 15 11:45:07.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:45:07.645: INFO: namespace: e2e-tests-kubectl-hqlcc, resource: bindings, ignored listing per whitelist Jun 15 11:45:07.673: INFO: namespace e2e-tests-kubectl-hqlcc deletion completed in 22.094775029s • [SLOW TEST:22.688 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:45:07.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Jun 15 11:45:08.265: INFO: Waiting up to 5m0s for pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-xwrh9" in namespace "e2e-tests-svcaccounts-k2kct" to be "success or failure" Jun 15 11:45:08.284: INFO: Pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-xwrh9": Phase="Pending", Reason="", readiness=false. Elapsed: 19.647814ms Jun 15 11:45:10.479: INFO: Pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-xwrh9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214296368s Jun 15 11:45:12.498: INFO: Pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-xwrh9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233399907s Jun 15 11:45:14.503: INFO: Pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-xwrh9": Phase="Running", Reason="", readiness=false. Elapsed: 6.238692926s Jun 15 11:45:16.545: INFO: Pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-xwrh9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.280705459s STEP: Saw pod success Jun 15 11:45:16.545: INFO: Pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-xwrh9" satisfied condition "success or failure" Jun 15 11:45:16.548: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-xwrh9 container token-test: STEP: delete the pod Jun 15 11:45:16.579: INFO: Waiting for pod pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-xwrh9 to disappear Jun 15 11:45:16.593: INFO: Pod pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-xwrh9 no longer exists STEP: Creating a pod to test consume service account root CA Jun 15 11:45:16.597: INFO: Waiting up to 5m0s for pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-mstzn" in namespace "e2e-tests-svcaccounts-k2kct" to be "success or failure" Jun 15 11:45:16.600: INFO: Pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-mstzn": Phase="Pending", Reason="", readiness=false. Elapsed: 3.113526ms Jun 15 11:45:18.604: INFO: Pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-mstzn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007601022s Jun 15 11:45:21.972: INFO: Pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-mstzn": Phase="Pending", Reason="", readiness=false. Elapsed: 5.375200821s Jun 15 11:45:23.976: INFO: Pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-mstzn": Phase="Running", Reason="", readiness=false. Elapsed: 7.379890035s Jun 15 11:45:25.982: INFO: Pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-mstzn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.385038789s STEP: Saw pod success Jun 15 11:45:25.982: INFO: Pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-mstzn" satisfied condition "success or failure" Jun 15 11:45:25.985: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-mstzn container root-ca-test: STEP: delete the pod Jun 15 11:45:26.049: INFO: Waiting for pod pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-mstzn to disappear Jun 15 11:45:26.053: INFO: Pod pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-mstzn no longer exists STEP: Creating a pod to test consume service account namespace Jun 15 11:45:26.057: INFO: Waiting up to 5m0s for pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-228x5" in namespace "e2e-tests-svcaccounts-k2kct" to be "success or failure" Jun 15 11:45:26.071: INFO: Pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-228x5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.620987ms Jun 15 11:45:28.076: INFO: Pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-228x5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019263263s Jun 15 11:45:30.205: INFO: Pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-228x5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148311482s Jun 15 11:45:32.210: INFO: Pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-228x5": Phase="Running", Reason="", readiness=false. Elapsed: 6.153149717s Jun 15 11:45:34.214: INFO: Pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-228x5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.157836528s STEP: Saw pod success Jun 15 11:45:34.214: INFO: Pod "pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-228x5" satisfied condition "success or failure" Jun 15 11:45:34.218: INFO: Trying to get logs from node hunter-worker pod pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-228x5 container namespace-test: STEP: delete the pod Jun 15 11:45:34.254: INFO: Waiting for pod pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-228x5 to disappear Jun 15 11:45:34.259: INFO: Pod pod-service-account-a98a6f8e-aefd-11ea-99db-0242ac11001b-228x5 no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:45:34.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-k2kct" for this suite. Jun 15 11:45:40.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:45:40.508: INFO: namespace: e2e-tests-svcaccounts-k2kct, resource: bindings, ignored listing per whitelist Jun 15 11:45:40.516: INFO: namespace e2e-tests-svcaccounts-k2kct deletion completed in 6.240166222s • [SLOW TEST:32.842 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:45:40.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 15 11:45:40.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-xpj2z' Jun 15 11:45:40.780: INFO: stderr: "" Jun 15 11:45:40.780: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Jun 15 11:45:40.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-xpj2z' Jun 15 11:45:51.774: INFO: stderr: "" Jun 15 11:45:51.774: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:45:51.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xpj2z" for this suite. Jun 15 11:45:57.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:45:57.817: INFO: namespace: e2e-tests-kubectl-xpj2z, resource: bindings, ignored listing per whitelist Jun 15 11:45:57.880: INFO: namespace e2e-tests-kubectl-xpj2z deletion completed in 6.102536602s • [SLOW TEST:17.363 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:45:57.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-lcmxq STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lcmxq to expose endpoints map[] Jun 15 11:45:58.020: INFO: Get endpoints failed (16.237461ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jun 15 11:45:59.024: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lcmxq exposes endpoints map[] (1.020391265s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-lcmxq STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lcmxq to expose endpoints map[pod1:[80]] Jun 15 11:46:03.106: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lcmxq exposes endpoints map[pod1:[80]] (4.075591213s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-lcmxq STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lcmxq to expose endpoints map[pod1:[80] pod2:[80]] Jun 15 11:46:07.203: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lcmxq exposes endpoints map[pod1:[80] pod2:[80]] (4.092486889s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-lcmxq STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lcmxq to expose endpoints map[pod2:[80]] Jun 15 11:46:08.228: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lcmxq exposes endpoints map[pod2:[80]] (1.020242399s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-lcmxq STEP: 
waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lcmxq to expose endpoints map[] Jun 15 11:46:09.275: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lcmxq exposes endpoints map[] (1.041692876s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:46:09.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-lcmxq" for this suite. Jun 15 11:46:15.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:46:15.407: INFO: namespace: e2e-tests-services-lcmxq, resource: bindings, ignored listing per whitelist Jun 15 11:46:15.412: INFO: namespace e2e-tests-services-lcmxq deletion completed in 6.088608353s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:17.532 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:46:15.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 15 11:46:15.513: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d19f890a-aefd-11ea-99db-0242ac11001b" in namespace "e2e-tests-downward-api-qsvbl" to be "success or failure" Jun 15 11:46:15.517: INFO: Pod "downwardapi-volume-d19f890a-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.761785ms Jun 15 11:46:17.660: INFO: Pod "downwardapi-volume-d19f890a-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146766739s Jun 15 11:46:19.664: INFO: Pod "downwardapi-volume-d19f890a-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151148812s Jun 15 11:46:21.978: INFO: Pod "downwardapi-volume-d19f890a-aefd-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.46467358s Jun 15 11:46:23.982: INFO: Pod "downwardapi-volume-d19f890a-aefd-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.468575853s STEP: Saw pod success Jun 15 11:46:23.982: INFO: Pod "downwardapi-volume-d19f890a-aefd-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:46:23.985: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-d19f890a-aefd-11ea-99db-0242ac11001b container client-container: STEP: delete the pod Jun 15 11:46:23.997: INFO: Waiting for pod downwardapi-volume-d19f890a-aefd-11ea-99db-0242ac11001b to disappear Jun 15 11:46:24.035: INFO: Pod downwardapi-volume-d19f890a-aefd-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:46:24.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qsvbl" for this suite. Jun 15 11:46:30.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:46:30.070: INFO: namespace: e2e-tests-downward-api-qsvbl, resource: bindings, ignored listing per whitelist Jun 15 11:46:30.125: INFO: namespace e2e-tests-downward-api-qsvbl deletion completed in 6.08649101s • [SLOW TEST:14.713 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:46:30.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-ndcrz A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-ndcrz;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-ndcrz A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-ndcrz;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-ndcrz.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-ndcrz.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-ndcrz.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-ndcrz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-ndcrz.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-ndcrz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-ndcrz.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-ndcrz.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-ndcrz.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 195.69.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.69.195_udp@PTR;check="$$(dig +tcp +noall +answer +search 195.69.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.69.195_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-ndcrz A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-ndcrz;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-ndcrz A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-ndcrz.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-ndcrz.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-ndcrz.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-ndcrz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-ndcrz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-ndcrz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-ndcrz.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-ndcrz.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 195.69.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.69.195_udp@PTR;check="$$(dig +tcp +noall +answer +search 195.69.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.69.195_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 15 11:47:03.388: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-ndcrz from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:03.395: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:03.414: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:03.416: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:03.418: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-ndcrz from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:03.420: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:03.422: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:03.424: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:03.426: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:03.428: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:03.442: INFO: Lookups 
using e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-ndcrz wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-ndcrz jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz jessie_udp@dns-test-service.e2e-tests-dns-ndcrz.svc jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc] Jun 15 11:47:08.461: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-ndcrz from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:08.469: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:09.513: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:09.516: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:09.518: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-ndcrz from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:09.520: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:09.523: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:09.525: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:09.528: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:09.530: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:09.546: INFO: Lookups using e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-ndcrz wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc 
jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-ndcrz jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz jessie_udp@dns-test-service.e2e-tests-dns-ndcrz.svc jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc] Jun 15 11:47:13.454: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-ndcrz from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:13.461: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:13.483: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:13.487: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:13.490: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-ndcrz from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:13.492: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:13.495: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:13.497: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:13.500: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:13.502: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:13.516: INFO: Lookups using e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-ndcrz wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-ndcrz jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz jessie_udp@dns-test-service.e2e-tests-dns-ndcrz.svc 
jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc] Jun 15 11:47:18.457: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-ndcrz from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:18.464: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:18.490: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:18.493: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:18.497: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-ndcrz from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:18.500: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:18.503: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:18.506: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:18.508: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:18.511: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:18.532: INFO: Lookups using e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-ndcrz wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-ndcrz jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz jessie_udp@dns-test-service.e2e-tests-dns-ndcrz.svc jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc] Jun 15 11:47:23.466: INFO: 
Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-ndcrz from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:23.474: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:23.495: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:23.497: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:23.500: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-ndcrz from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:23.503: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:23.506: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:23.508: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:23.512: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:23.514: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:23.528: INFO: Lookups using e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-ndcrz wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-ndcrz jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz jessie_udp@dns-test-service.e2e-tests-dns-ndcrz.svc jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc] Jun 15 11:47:28.456: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-ndcrz from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods 
dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:28.466: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:28.488: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:28.491: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:28.493: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-ndcrz from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:28.496: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:28.498: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:28.500: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:28.503: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:28.505: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc from pod e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b: the server could not find the requested resource (get pods dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b) Jun 15 11:47:28.517: INFO: Lookups using e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-ndcrz wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-ndcrz jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz jessie_udp@dns-test-service.e2e-tests-dns-ndcrz.svc jessie_tcp@dns-test-service.e2e-tests-dns-ndcrz.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-ndcrz.svc] Jun 15 11:47:33.552: INFO: DNS probes using e2e-tests-dns-ndcrz/dns-test-da6b1a13-aefd-11ea-99db-0242ac11001b succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:47:33.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "e2e-tests-dns-ndcrz" for this suite. Jun 15 11:47:41.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:47:41.876: INFO: namespace: e2e-tests-dns-ndcrz, resource: bindings, ignored listing per whitelist Jun 15 11:47:41.909: INFO: namespace e2e-tests-dns-ndcrz deletion completed in 8.113341415s • [SLOW TEST:71.784 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:47:41.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args Jun 15 11:47:42.010: INFO: Waiting up to 5m0s for pod "var-expansion-052e1dfb-aefe-11ea-99db-0242ac11001b" in namespace "e2e-tests-var-expansion-6vd4v" to be "success or failure" Jun 15 11:47:42.014: INFO: Pod "var-expansion-052e1dfb-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234226ms Jun 15 11:47:44.018: INFO: Pod "var-expansion-052e1dfb-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008739598s Jun 15 11:47:46.985: INFO: Pod "var-expansion-052e1dfb-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.975521787s Jun 15 11:47:49.239: INFO: Pod "var-expansion-052e1dfb-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.228983148s Jun 15 11:47:51.242: INFO: Pod "var-expansion-052e1dfb-aefe-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.231968406s STEP: Saw pod success Jun 15 11:47:51.242: INFO: Pod "var-expansion-052e1dfb-aefe-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:47:51.244: INFO: Trying to get logs from node hunter-worker pod var-expansion-052e1dfb-aefe-11ea-99db-0242ac11001b container dapi-container: STEP: delete the pod Jun 15 11:47:51.295: INFO: Waiting for pod var-expansion-052e1dfb-aefe-11ea-99db-0242ac11001b to disappear Jun 15 11:47:51.304: INFO: Pod var-expansion-052e1dfb-aefe-11ea-99db-0242ac11001b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:47:51.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-6vd4v" for this suite. 
Jun 15 11:48:01.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:48:01.380: INFO: namespace: e2e-tests-var-expansion-6vd4v, resource: bindings, ignored listing per whitelist Jun 15 11:48:01.394: INFO: namespace e2e-tests-var-expansion-6vd4v deletion completed in 10.086736142s • [SLOW TEST:19.485 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:48:01.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-12c261af-aefe-11ea-99db-0242ac11001b STEP: Creating secret with name s-test-opt-upd-12c26201-aefe-11ea-99db-0242ac11001b STEP: Creating the pod STEP: Deleting secret s-test-opt-del-12c261af-aefe-11ea-99db-0242ac11001b STEP: Updating secret s-test-opt-upd-12c26201-aefe-11ea-99db-0242ac11001b STEP: Creating secret with name s-test-opt-create-12c2621a-aefe-11ea-99db-0242ac11001b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:48:27.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-mlq44" for this suite. 
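
The secrets test above hinges on the optional flag of a secret volume source: a pod can mount a secret that does not exist yet, keep running while a mounted secret is deleted, and the kubelet eventually projects creations and updates into the mounted files. A minimal sketch of that shape, with illustrative names:

# Pod mounting an optional secret volume; the secret may be created, updated
# or deleted while the pod runs, and the mounted directory follows along.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["/bin/sh", "-c", "while true; do ls /etc/secret-volume; sleep 5; done"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: s-test-opt-create-demo   # does not have to exist yet
      optional: true
EOF
# Creating the secret later is eventually observed inside the running pod.
kubectl create secret generic s-test-opt-create-demo --from-literal=data-1=value-1
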
Jun 15 11:48:49.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:48:49.954: INFO: namespace: e2e-tests-secrets-mlq44, resource: bindings, ignored listing per whitelist Jun 15 11:48:50.001: INFO: namespace e2e-tests-secrets-mlq44 deletion completed in 22.08874036s • [SLOW TEST:48.607 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:48:50.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Jun 15 11:48:54.180: INFO: Pod pod-hostip-2dca40cf-aefe-11ea-99db-0242ac11001b has hostIP: 172.17.0.3 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:48:54.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-f2gc8" for this suite. 
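
The host-IP assertion above reads status.hostIP, which the kubelet fills in once the pod is bound to a node (172.17.0.3 in this run). The same field can be queried directly, or handed to the container through the downward API; the pod name below is illustrative:

# status.hostIP is the address of the node the pod landed on.
kubectl get pod pod-hostip-demo -o jsonpath='{.status.hostIP}{"\n"}'
# The downward API exposes the same value as an environment variable:
#   env:
#   - name: HOST_IP
#     valueFrom:
#       fieldRef:
#         fieldPath: status.hostIP
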
Jun 15 11:49:16.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:49:16.356: INFO: namespace: e2e-tests-pods-f2gc8, resource: bindings, ignored listing per whitelist Jun 15 11:49:16.358: INFO: namespace e2e-tests-pods-f2gc8 deletion completed in 22.175926582s • [SLOW TEST:26.357 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:49:16.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Jun 15 11:49:16.631: INFO: Waiting up to 5m0s for pod "pod-3d93e728-aefe-11ea-99db-0242ac11001b" in namespace "e2e-tests-emptydir-7wwz5" to be "success or failure" Jun 15 11:49:16.636: INFO: Pod "pod-3d93e728-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.026782ms Jun 15 11:49:18.640: INFO: Pod "pod-3d93e728-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009244195s Jun 15 11:49:20.643: INFO: Pod "pod-3d93e728-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012297545s Jun 15 11:49:22.646: INFO: Pod "pod-3d93e728-aefe-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 6.014822628s Jun 15 11:49:24.746: INFO: Pod "pod-3d93e728-aefe-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.115686566s STEP: Saw pod success Jun 15 11:49:24.747: INFO: Pod "pod-3d93e728-aefe-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:49:24.749: INFO: Trying to get logs from node hunter-worker2 pod pod-3d93e728-aefe-11ea-99db-0242ac11001b container test-container: STEP: delete the pod Jun 15 11:49:24.844: INFO: Waiting for pod pod-3d93e728-aefe-11ea-99db-0242ac11001b to disappear Jun 15 11:49:24.896: INFO: Pod pod-3d93e728-aefe-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:49:24.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-7wwz5" for this suite. 
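
The emptydir test above mounts an emptyDir volume with medium Memory, which the kubelet backs with tmpfs, and checks the permission bits of the mount from inside the container. A minimal sketch, with an illustrative pod name and probe command:

# emptyDir with medium: Memory shows up as a tmpfs mount in the container.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Print the mount entry and the directory's permission bits, then exit.
    command: ["/bin/sh", "-c", "mount | grep ' /cache ' ; stat -c '%a' /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory
EOF
kubectl logs emptydir-tmpfs-demo
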
Jun 15 11:49:31.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:49:31.425: INFO: namespace: e2e-tests-emptydir-7wwz5, resource: bindings, ignored listing per whitelist Jun 15 11:49:31.455: INFO: namespace e2e-tests-emptydir-7wwz5 deletion completed in 6.556967482s • [SLOW TEST:15.097 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:49:31.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 15 11:49:32.006: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:49:38.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-wj2ct" for this suite. 
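
The websocket test above streams container logs through the API server's pod log subresource rather than through kubectl's own streaming. A rough equivalent over an authenticated local proxy (the namespace is taken from this run, the pod name is illustrative; the e2e client additionally negotiates a WebSocket upgrade on the same endpoint):

# Expose the API server locally using the credentials in the kubeconfig.
kubectl proxy --port=8001 &
# The log subresource streams the container's output; follow=true keeps the
# connection open, which is what a websocket-based client depends on.
curl "http://127.0.0.1:8001/api/v1/namespaces/e2e-tests-pods-wj2ct/pods/pod-logs-websocket-demo/log?follow=true"
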
Jun 15 11:50:42.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:50:42.124: INFO: namespace: e2e-tests-pods-wj2ct, resource: bindings, ignored listing per whitelist Jun 15 11:50:42.165: INFO: namespace e2e-tests-pods-wj2ct deletion completed in 1m4.07051362s • [SLOW TEST:70.709 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:50:42.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jun 15 11:50:42.264: INFO: namespace e2e-tests-kubectl-7fv6j Jun 15 11:50:42.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7fv6j' Jun 15 11:50:44.666: INFO: stderr: "" Jun 15 11:50:44.666: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
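
The step above pipes a manifest into kubectl on stdin (create -f -), so the RC definition itself never appears in the log. A sketch of a replication controller with the same shape, using the image and labels that show up elsewhere in this run; the suite ships its own redis-master manifest, which is not reproduced here:

# Create a replication controller from a manifest supplied on stdin,
# mirroring the `kubectl create -f -` invocation in the log.
cat <<'EOF' | kubectl --namespace=e2e-tests-kubectl-7fv6j create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - containerPort: 6379
EOF
# Wait until the single replica is Running before exposing it.
kubectl --namespace=e2e-tests-kubectl-7fv6j get pods -l app=redis -w
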
Jun 15 11:50:45.671: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:50:45.671: INFO: Found 0 / 1 Jun 15 11:50:46.724: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:50:46.724: INFO: Found 0 / 1 Jun 15 11:50:47.840: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:50:47.840: INFO: Found 0 / 1 Jun 15 11:50:48.718: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:50:48.718: INFO: Found 0 / 1 Jun 15 11:50:49.671: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:50:49.671: INFO: Found 0 / 1 Jun 15 11:50:51.108: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:50:51.108: INFO: Found 0 / 1 Jun 15 11:50:51.671: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:50:51.671: INFO: Found 0 / 1 Jun 15 11:50:52.672: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:50:52.672: INFO: Found 0 / 1 Jun 15 11:50:53.832: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:50:53.833: INFO: Found 0 / 1 Jun 15 11:50:54.671: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:50:54.671: INFO: Found 0 / 1 Jun 15 11:50:55.670: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:50:55.670: INFO: Found 0 / 1 Jun 15 11:50:56.669: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:50:56.669: INFO: Found 0 / 1 Jun 15 11:50:57.688: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:50:57.688: INFO: Found 0 / 1 Jun 15 11:50:59.317: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:50:59.317: INFO: Found 0 / 1 Jun 15 11:50:59.669: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:50:59.669: INFO: Found 0 / 1 Jun 15 11:51:00.670: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:51:00.670: INFO: Found 1 / 1 Jun 15 11:51:00.670: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 15 11:51:00.674: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:51:00.674: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 15 11:51:00.674: INFO: wait on redis-master startup in e2e-tests-kubectl-7fv6j Jun 15 11:51:00.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zfn4m redis-master --namespace=e2e-tests-kubectl-7fv6j' Jun 15 11:51:00.779: INFO: stderr: "" Jun 15 11:51:00.779: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 15 Jun 11:50:57.971 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 15 Jun 11:50:57.977 # Server started, Redis version 3.2.12\n1:M 15 Jun 11:50:57.977 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 15 Jun 11:50:57.977 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jun 15 11:51:00.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-7fv6j' Jun 15 11:51:00.930: INFO: stderr: "" Jun 15 11:51:00.930: INFO: stdout: "service/rm2 exposed\n" Jun 15 11:51:00.948: INFO: Service rm2 in namespace e2e-tests-kubectl-7fv6j found. STEP: exposing service Jun 15 11:51:03.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-7fv6j' Jun 15 11:51:03.636: INFO: stderr: "" Jun 15 11:51:03.636: INFO: stdout: "service/rm3 exposed\n" Jun 15 11:51:03.748: INFO: Service rm3 in namespace e2e-tests-kubectl-7fv6j found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:51:05.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7fv6j" for this suite. Jun 15 11:51:52.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:51:52.718: INFO: namespace: e2e-tests-kubectl-7fv6j, resource: bindings, ignored listing per whitelist Jun 15 11:51:52.754: INFO: namespace e2e-tests-kubectl-7fv6j deletion completed in 46.995932473s • [SLOW TEST:70.589 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:51:52.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 15 11:51:52.846: INFO: Waiting up to 5m0s for pod "pod-9aae864c-aefe-11ea-99db-0242ac11001b" in namespace "e2e-tests-emptydir-kgq5q" to be "success or failure" Jun 15 11:51:52.856: INFO: Pod "pod-9aae864c-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.583918ms Jun 15 11:51:54.859: INFO: Pod "pod-9aae864c-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013328904s Jun 15 11:51:56.862: INFO: Pod "pod-9aae864c-aefe-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016353271s STEP: Saw pod success Jun 15 11:51:56.862: INFO: Pod "pod-9aae864c-aefe-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:51:56.864: INFO: Trying to get logs from node hunter-worker pod pod-9aae864c-aefe-11ea-99db-0242ac11001b container test-container: STEP: delete the pod Jun 15 11:51:56.881: INFO: Waiting for pod pod-9aae864c-aefe-11ea-99db-0242ac11001b to disappear Jun 15 11:51:56.886: INFO: Pod pod-9aae864c-aefe-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:51:56.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kgq5q" for this suite. Jun 15 11:52:02.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:52:02.935: INFO: namespace: e2e-tests-emptydir-kgq5q, resource: bindings, ignored listing per whitelist Jun 15 11:52:02.964: INFO: namespace e2e-tests-emptydir-kgq5q deletion completed in 6.075906854s • [SLOW TEST:10.209 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:52:02.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:52:07.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-8dldj" for this suite. 
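
The kubelet test above runs a busybox pod with a fixed shell command and then checks that the expected text can be read back through the logs endpoint. A minimal sketch with an illustrative pod name and message:

# Run a one-shot busybox command and read its output back from the kubelet.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-scheduling-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "echo 'Hello World'"]
EOF
# Once the pod has run, the echoed text appears in its container logs.
kubectl logs busybox-scheduling-demo
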
Jun 15 11:52:57.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:52:57.371: INFO: namespace: e2e-tests-kubelet-test-8dldj, resource: bindings, ignored listing per whitelist Jun 15 11:52:57.422: INFO: namespace e2e-tests-kubelet-test-8dldj deletion completed in 50.278626239s • [SLOW TEST:54.458 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:52:57.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 15 11:52:57.490: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-tht7r" to be "success or failure" Jun 15 11:52:57.513: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.581879ms Jun 15 11:52:59.785: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295471187s Jun 15 11:53:02.324: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.833561375s Jun 15 11:53:04.328: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.837601266s Jun 15 11:53:06.332: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.841851168s Jun 15 11:53:08.803: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.31259445s Jun 15 11:53:11.210: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.719656013s Jun 15 11:53:13.213: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.723471201s Jun 15 11:53:16.795: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.305070728s Jun 15 11:53:18.799: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.308738085s Jun 15 11:53:21.701: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.21127666s Jun 15 11:53:24.576: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 27.085637392s Jun 15 11:53:26.580: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.089653674s Jun 15 11:53:28.972: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 31.482289642s Jun 15 11:53:33.669: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 36.178838343s Jun 15 11:53:35.673: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 38.18344028s Jun 15 11:53:37.677: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 40.187042119s Jun 15 11:53:39.772: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 42.281844343s Jun 15 11:53:41.901: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 44.411046334s Jun 15 11:53:43.905: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 46.415035522s STEP: Saw pod success Jun 15 11:53:43.905: INFO: Pod "downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:53:43.908: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b container client-container: STEP: delete the pod Jun 15 11:53:44.248: INFO: Waiting for pod downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b to disappear Jun 15 11:53:44.261: INFO: Pod downwardapi-volume-c1386aa8-aefe-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:53:44.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tht7r" for this suite. 
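
The projected-volume test above verifies that defaultMode on a projected downwardAPI volume is applied to the files it materialises. A minimal sketch of the shape involved; the mode value, labels and pod name are illustrative rather than the test's exact spec:

# Projected downwardAPI volume with an explicit defaultMode; the container
# prints the permission bits of the projected file and exits.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-demo
  labels:
    zone: demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "stat -Lc '%a' /etc/podinfo/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400          # mode expected on the projected files
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
kubectl logs projected-downwardapi-demo
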
Jun 15 11:53:50.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:53:50.304: INFO: namespace: e2e-tests-projected-tht7r, resource: bindings, ignored listing per whitelist Jun 15 11:53:50.346: INFO: namespace e2e-tests-projected-tht7r deletion completed in 6.081115884s • [SLOW TEST:52.924 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:53:50.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 15 11:53:50.464: INFO: Creating deployment "test-recreate-deployment" Jun 15 11:53:50.470: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jun 15 11:53:50.500: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Jun 15 11:53:52.775: INFO: Waiting deployment "test-recreate-deployment" to complete Jun 15 11:53:52.777: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727818830, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727818830, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727818830, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727818830, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 15 11:53:54.798: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jun 15 11:53:54.804: INFO: Updating deployment test-recreate-deployment Jun 15 11:53:54.804: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 15 11:53:55.554: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-fkmnn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fkmnn/deployments/test-recreate-deployment,UID:e0ccc634-aefe-11ea-99e8-0242ac110002,ResourceVersion:16075933,Generation:2,CreationTimestamp:2020-06-15 11:53:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-06-15 11:53:54 +0000 UTC 2020-06-15 11:53:54 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-06-15 11:53:55 +0000 UTC 2020-06-15 11:53:50 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jun 15 11:53:55.556: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-fkmnn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fkmnn/replicasets/test-recreate-deployment-589c4bfd,UID:e375b4a9-aefe-11ea-99e8-0242ac110002,ResourceVersion:16075932,Generation:1,CreationTimestamp:2020-06-15 11:53:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e0ccc634-aefe-11ea-99e8-0242ac110002 0xc0020f26df 0xc0020f26f0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 15 11:53:55.556: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 15 11:53:55.557: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-fkmnn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fkmnn/replicasets/test-recreate-deployment-5bf7f65dc,UID:e0d23eb0-aefe-11ea-99e8-0242ac110002,ResourceVersion:16075922,Generation:2,CreationTimestamp:2020-06-15 11:53:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e0ccc634-aefe-11ea-99e8-0242ac110002 0xc0020f27b0 0xc0020f27b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 15 11:53:55.603: INFO: Pod "test-recreate-deployment-589c4bfd-b2rm6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-b2rm6,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-fkmnn,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fkmnn/pods/test-recreate-deployment-589c4bfd-b2rm6,UID:e37718ac-aefe-11ea-99e8-0242ac110002,ResourceVersion:16075934,Generation:0,CreationTimestamp:2020-06-15 11:53:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd e375b4a9-aefe-11ea-99e8-0242ac110002 0xc001a5f1af 0xc001a5f210}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-7pz6w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7pz6w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7pz6w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a5f5d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a5f660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:53:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:53:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:53:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 11:53:54 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-15 11:53:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:53:55.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-fkmnn" for this suite. 
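
The deployment dump above shows strategy Type Recreate with the pod template swapped from the redis test image to nginx:1.14-alpine. A minimal sketch of a deployment with the same strategy plus a rollout trigger; kubectl set image stands in for the programmatic template update the test performs:

# A Recreate deployment scales the old ReplicaSet to zero before the new one
# comes up, so old and new pods never run side by side.
cat <<'EOF' | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# Triggering a new rollout swaps the pod template image; with Recreate, the
# redis pod is deleted before the nginx pod is created.
kubectl set image deployment/test-recreate-deployment redis=docker.io/library/nginx:1.14-alpine
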
Jun 15 11:54:03.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:54:03.802: INFO: namespace: e2e-tests-deployment-fkmnn, resource: bindings, ignored listing per whitelist Jun 15 11:54:03.886: INFO: namespace e2e-tests-deployment-fkmnn deletion completed in 8.183071877s • [SLOW TEST:13.539 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:54:03.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 15 11:54:04.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Jun 15 11:54:04.697: INFO: stderr: "" Jun 15 11:54:04.697: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-06-08T12:07:46Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Jun 15 11:54:04.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ss85r' Jun 15 11:54:04.979: INFO: stderr: "" Jun 15 11:54:04.979: INFO: stdout: "replicationcontroller/redis-master created\n" Jun 15 11:54:04.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ss85r' Jun 15 11:54:05.330: INFO: stderr: "" Jun 15 11:54:05.330: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. 
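
The describe test that starts above drives kubectl describe against the pod, the replication controller, the service and a node, as the following log shows. Stripped of the kubeconfig flag, the invocations from this run are:

# Look up the RC-generated pod name first, then describe each resource.
kubectl --namespace=e2e-tests-kubectl-ss85r get pods -l app=redis
kubectl --namespace=e2e-tests-kubectl-ss85r describe pod redis-master-p55qg
kubectl --namespace=e2e-tests-kubectl-ss85r describe rc redis-master
kubectl --namespace=e2e-tests-kubectl-ss85r describe service redis-master
kubectl describe node hunter-control-plane
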
Jun 15 11:54:06.395: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:54:06.395: INFO: Found 0 / 1 Jun 15 11:54:07.334: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:54:07.334: INFO: Found 0 / 1 Jun 15 11:54:09.117: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:54:09.117: INFO: Found 0 / 1 Jun 15 11:54:09.608: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:54:09.608: INFO: Found 0 / 1 Jun 15 11:54:10.334: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:54:10.334: INFO: Found 0 / 1 Jun 15 11:54:11.476: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:54:11.476: INFO: Found 0 / 1 Jun 15 11:54:12.335: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:54:12.335: INFO: Found 0 / 1 Jun 15 11:54:13.764: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:54:13.764: INFO: Found 0 / 1 Jun 15 11:54:14.626: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:54:14.626: INFO: Found 0 / 1 Jun 15 11:54:15.512: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:54:15.512: INFO: Found 0 / 1 Jun 15 11:54:16.865: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:54:16.865: INFO: Found 0 / 1 Jun 15 11:54:17.532: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:54:17.532: INFO: Found 1 / 1 Jun 15 11:54:17.532: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 15 11:54:17.535: INFO: Selector matched 1 pods for map[app:redis] Jun 15 11:54:17.535: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 15 11:54:17.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-p55qg --namespace=e2e-tests-kubectl-ss85r' Jun 15 11:54:17.792: INFO: stderr: "" Jun 15 11:54:17.792: INFO: stdout: "Name: redis-master-p55qg\nNamespace: e2e-tests-kubectl-ss85r\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.17.0.4\nStart Time: Mon, 15 Jun 2020 11:54:05 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.7\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://2c25b6c60ad0e88ec5d3ac8f007de5ff3d9d5ae0ba8da156dc041539bfe395b4\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 15 Jun 2020 11:54:15 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-wm742 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-wm742:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-wm742\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 12s default-scheduler Successfully assigned e2e-tests-kubectl-ss85r/redis-master-p55qg to hunter-worker2\n Normal Pulled 11s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker2 Created container\n Normal Started 2s kubelet, hunter-worker2 Started container\n" Jun 15 11:54:17.793: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-ss85r' Jun 15 11:54:17.914: INFO: stderr: "" Jun 15 11:54:17.914: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-ss85r\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 13s replication-controller Created pod: redis-master-p55qg\n" Jun 15 11:54:17.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-ss85r' Jun 15 11:54:18.047: INFO: stderr: "" Jun 15 11:54:18.047: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-ss85r\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.104.64.168\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.7:6379\nSession Affinity: None\nEvents: \n" Jun 15 11:54:18.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' Jun 15 11:54:18.286: INFO: stderr: "" Jun 15 11:54:18.286: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 15 Jun 2020 11:54:13 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 15 Jun 2020 11:54:13 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 15 Jun 2020 11:54:13 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 15 Jun 2020 11:54:13 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ 
---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 91d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 91d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 91d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 91d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 91d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 91d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 91d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jun 15 11:54:18.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-ss85r' Jun 15 11:54:18.389: INFO: stderr: "" Jun 15 11:54:18.389: INFO: stdout: "Name: e2e-tests-kubectl-ss85r\nLabels: e2e-framework=kubectl\n e2e-run=86b83ddd-aef5-11ea-99db-0242ac11001b\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:54:18.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ss85r" for this suite. Jun 15 11:54:38.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:54:38.499: INFO: namespace: e2e-tests-kubectl-ss85r, resource: bindings, ignored listing per whitelist Jun 15 11:54:38.509: INFO: namespace e2e-tests-kubectl-ss85r deletion completed in 20.116699099s • [SLOW TEST:34.623 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:54:38.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Jun 15 11:54:38.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pznwm' Jun 15 11:54:38.805: INFO: stderr: "" Jun 15 11:54:38.805: INFO: stdout: "pod/pause created\n" Jun 15 11:54:38.805: INFO: Waiting up to 5m0s for 1 pods to be 
running and ready: [pause] Jun 15 11:54:38.805: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-pznwm" to be "running and ready" Jun 15 11:54:38.825: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 19.321964ms Jun 15 11:54:40.937: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131615162s Jun 15 11:54:43.257: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.452088196s Jun 15 11:54:45.260: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.455163624s Jun 15 11:54:47.264: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.459260894s Jun 15 11:54:49.268: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.462377853s Jun 15 11:54:51.271: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.46559004s Jun 15 11:54:53.425: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 14.619901077s Jun 15 11:54:55.788: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 16.982546994s Jun 15 11:54:58.423: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 19.617375038s Jun 15 11:55:00.427: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 21.621339682s Jun 15 11:55:02.430: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 23.624572425s Jun 15 11:55:04.928: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 26.122742984s Jun 15 11:55:07.120: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 28.314601292s Jun 15 11:55:07.120: INFO: Pod "pause" satisfied condition "running and ready" Jun 15 11:55:07.120: INFO: Wanted all 1 pods to be running and ready. Result: true. 
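The framework polls the pause pod directly, and readiness flips after roughly 28 seconds of Pending here. Outside the e2e harness the same wait can be expressed with kubectl wait; this is shown purely as an equivalent hand-run check, not something the test executes:

# Hand-run equivalent of the readiness poll above (not what the framework runs).
kubectl --namespace=e2e-tests-kubectl-pznwm wait pod/pause \
  --for=condition=Ready --timeout=5m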
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Jun 15 11:55:07.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-pznwm' Jun 15 11:55:08.355: INFO: stderr: "" Jun 15 11:55:08.355: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jun 15 11:55:08.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-pznwm' Jun 15 11:55:08.448: INFO: stderr: "" Jun 15 11:55:08.448: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 30s testing-label-value\n" STEP: removing the label testing-label of a pod Jun 15 11:55:08.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-pznwm' Jun 15 11:55:08.559: INFO: stderr: "" Jun 15 11:55:08.559: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jun 15 11:55:08.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-pznwm' Jun 15 11:55:08.822: INFO: stderr: "" Jun 15 11:55:08.822: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 30s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Jun 15 11:55:08.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-pznwm' Jun 15 11:55:10.399: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 15 11:55:10.399: INFO: stdout: "pod \"pause\" force deleted\n" Jun 15 11:55:10.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-pznwm' Jun 15 11:55:10.687: INFO: stderr: "No resources found.\n" Jun 15 11:55:10.687: INFO: stdout: "" Jun 15 11:55:10.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-pznwm -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 15 11:55:10.781: INFO: stderr: "" Jun 15 11:55:10.781: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:55:10.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-pznwm" for this suite. 
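Stripped of the harness plumbing, the label round-trip exercised above is three kubectl invocations; the trailing dash in the last one is what removes the label:

# The same add / verify / remove cycle the test drives, minus the --kubeconfig plumbing.
kubectl --namespace=e2e-tests-kubectl-pznwm label pods pause testing-label=testing-label-value
kubectl --namespace=e2e-tests-kubectl-pznwm get pod pause -L testing-label   # shows the TESTING-LABEL column
kubectl --namespace=e2e-tests-kubectl-pznwm label pods pause testing-label-  # "key-" deletes the label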
Jun 15 11:55:20.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:55:21.078: INFO: namespace: e2e-tests-kubectl-pznwm, resource: bindings, ignored listing per whitelist Jun 15 11:55:21.121: INFO: namespace e2e-tests-kubectl-pznwm deletion completed in 10.336265147s • [SLOW TEST:42.612 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:55:21.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 15 11:55:28.697: INFO: Successfully updated pod "pod-update-activedeadlineseconds-174165d4-aeff-11ea-99db-0242ac11001b" Jun 15 11:55:28.697: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-174165d4-aeff-11ea-99db-0242ac11001b" in namespace "e2e-tests-pods-5db7d" to be "terminated due to deadline exceeded" Jun 15 11:55:28.728: INFO: Pod "pod-update-activedeadlineseconds-174165d4-aeff-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 31.600235ms Jun 15 11:55:30.732: INFO: Pod "pod-update-activedeadlineseconds-174165d4-aeff-11ea-99db-0242ac11001b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.034825228s Jun 15 11:55:30.732: INFO: Pod "pod-update-activedeadlineseconds-174165d4-aeff-11ea-99db-0242ac11001b" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:55:30.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-5db7d" for this suite. 
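activeDeadlineSeconds is one of the few pod-spec fields that may be tightened on a live pod; once the deadline passes, the kubelet fails the pod with reason DeadlineExceeded, which is exactly the Phase="Failed", Reason="DeadlineExceeded" transition logged above. The test performs the update through the API; an equivalent hand-run patch against a running pod (the deadline value here is arbitrary, chosen only for illustration) would be:

# Illustrative only: shrink the deadline on a running pod and watch it fail.
kubectl --namespace=e2e-tests-pods-5db7d patch pod pod-update-activedeadlineseconds-174165d4-aeff-11ea-99db-0242ac11001b \
  --type=merge -p '{"spec":{"activeDeadlineSeconds":5}}'
kubectl --namespace=e2e-tests-pods-5db7d get pods -w   # phase moves Running -> Failed (DeadlineExceeded)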
Jun 15 11:55:37.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:55:37.423: INFO: namespace: e2e-tests-pods-5db7d, resource: bindings, ignored listing per whitelist Jun 15 11:55:37.476: INFO: namespace e2e-tests-pods-5db7d deletion completed in 6.740773181s • [SLOW TEST:16.355 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:55:37.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 15 11:55:37.606: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 5.064933ms) Jun 15 11:55:37.608: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.795751ms) Jun 15 11:55:37.611: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.422556ms) Jun 15 11:55:37.613: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.38023ms) Jun 15 11:55:37.616: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.781193ms) Jun 15 11:55:37.619: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.184973ms) Jun 15 11:55:37.622: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.706848ms) Jun 15 11:55:37.624: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.067087ms) Jun 15 11:55:37.627: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.548783ms) Jun 15 11:55:37.629: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.305723ms) Jun 15 11:55:37.632: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.79178ms) Jun 15 11:55:37.634: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.548441ms) Jun 15 11:55:37.637: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.37359ms) Jun 15 11:55:37.639: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.194935ms) Jun 15 11:55:37.641: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.303139ms) Jun 15 11:55:37.643: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.983124ms) Jun 15 11:55:37.645: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.804767ms) Jun 15 11:55:37.647: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.64069ms) Jun 15 11:55:37.680: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 33.489449ms) Jun 15 11:55:37.684: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.3556ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:55:37.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-ldfnn" for this suite. Jun 15 11:55:44.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:55:45.158: INFO: namespace: e2e-tests-proxy-ldfnn, resource: bindings, ignored listing per whitelist Jun 15 11:55:45.201: INFO: namespace e2e-tests-proxy-ldfnn deletion completed in 7.511471893s • [SLOW TEST:7.725 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:55:45.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-253d7da4-aeff-11ea-99db-0242ac11001b STEP: Creating a pod to test consume secrets Jun 15 11:55:45.315: INFO: Waiting up to 5m0s for pod "pod-secrets-253e00c2-aeff-11ea-99db-0242ac11001b" in namespace "e2e-tests-secrets-l55nn" to be "success or failure" Jun 15 11:55:45.348: INFO: Pod "pod-secrets-253e00c2-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.900543ms Jun 15 11:55:47.574: INFO: Pod "pod-secrets-253e00c2-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.258336102s Jun 15 11:55:49.576: INFO: Pod "pod-secrets-253e00c2-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.260903938s Jun 15 11:55:51.873: INFO: Pod "pod-secrets-253e00c2-aeff-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 6.557359653s Jun 15 11:55:53.875: INFO: Pod "pod-secrets-253e00c2-aeff-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.559989716s STEP: Saw pod success Jun 15 11:55:53.875: INFO: Pod "pod-secrets-253e00c2-aeff-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:55:53.877: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-253e00c2-aeff-11ea-99db-0242ac11001b container secret-env-test: STEP: delete the pod Jun 15 11:55:53.982: INFO: Waiting for pod pod-secrets-253e00c2-aeff-11ea-99db-0242ac11001b to disappear Jun 15 11:55:54.031: INFO: Pod pod-secrets-253e00c2-aeff-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:55:54.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-l55nn" for this suite. Jun 15 11:56:00.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:56:00.090: INFO: namespace: e2e-tests-secrets-l55nn, resource: bindings, ignored listing per whitelist Jun 15 11:56:00.136: INFO: namespace e2e-tests-secrets-l55nn deletion completed in 6.100289594s • [SLOW TEST:14.935 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:56:00.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 15 11:56:00.248: INFO: Waiting up to 5m0s for pod "pod-2e26bf7d-aeff-11ea-99db-0242ac11001b" in namespace "e2e-tests-emptydir-n798g" to be "success or failure" Jun 15 11:56:00.252: INFO: Pod "pod-2e26bf7d-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.260332ms Jun 15 11:56:02.279: INFO: Pod "pod-2e26bf7d-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031410206s Jun 15 11:56:04.388: INFO: Pod "pod-2e26bf7d-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140313274s Jun 15 11:56:06.472: INFO: Pod "pod-2e26bf7d-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224436351s Jun 15 11:56:08.476: INFO: Pod "pod-2e26bf7d-aeff-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 8.227974838s Jun 15 11:56:10.653: INFO: Pod "pod-2e26bf7d-aeff-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.40567651s STEP: Saw pod success Jun 15 11:56:10.653: INFO: Pod "pod-2e26bf7d-aeff-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:56:10.703: INFO: Trying to get logs from node hunter-worker2 pod pod-2e26bf7d-aeff-11ea-99db-0242ac11001b container test-container: STEP: delete the pod Jun 15 11:56:10.850: INFO: Waiting for pod pod-2e26bf7d-aeff-11ea-99db-0242ac11001b to disappear Jun 15 11:56:11.104: INFO: Pod pod-2e26bf7d-aeff-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:56:11.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-n798g" for this suite. Jun 15 11:56:17.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:56:17.603: INFO: namespace: e2e-tests-emptydir-n798g, resource: bindings, ignored listing per whitelist Jun 15 11:56:17.681: INFO: namespace e2e-tests-emptydir-n798g deletion completed in 6.57447915s • [SLOW TEST:17.546 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:56:17.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Jun 15 11:56:17.738: INFO: Waiting up to 5m0s for pod "client-containers-3893c57e-aeff-11ea-99db-0242ac11001b" in namespace "e2e-tests-containers-qbdtm" to be "success or failure" Jun 15 11:56:17.794: INFO: Pod "client-containers-3893c57e-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 55.967448ms Jun 15 11:56:19.798: INFO: Pod "client-containers-3893c57e-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059400572s Jun 15 11:56:22.053: INFO: Pod "client-containers-3893c57e-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314634431s Jun 15 11:56:24.816: INFO: Pod "client-containers-3893c57e-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.077236445s Jun 15 11:56:27.097: INFO: Pod "client-containers-3893c57e-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.358640543s Jun 15 11:56:29.191: INFO: Pod "client-containers-3893c57e-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.452208729s Jun 15 11:56:31.194: INFO: Pod "client-containers-3893c57e-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.455718669s Jun 15 11:56:33.199: INFO: Pod "client-containers-3893c57e-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.460108371s Jun 15 11:56:35.201: INFO: Pod "client-containers-3893c57e-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.462892538s Jun 15 11:56:37.205: INFO: Pod "client-containers-3893c57e-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.466443211s Jun 15 11:56:39.280: INFO: Pod "client-containers-3893c57e-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.541490737s Jun 15 11:56:41.298: INFO: Pod "client-containers-3893c57e-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.559269505s Jun 15 11:56:43.301: INFO: Pod "client-containers-3893c57e-aeff-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.562832731s STEP: Saw pod success Jun 15 11:56:43.301: INFO: Pod "client-containers-3893c57e-aeff-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 11:56:43.307: INFO: Trying to get logs from node hunter-worker pod client-containers-3893c57e-aeff-11ea-99db-0242ac11001b container test-container: STEP: delete the pod Jun 15 11:56:43.340: INFO: Waiting for pod client-containers-3893c57e-aeff-11ea-99db-0242ac11001b to disappear Jun 15 11:56:43.379: INFO: Pod client-containers-3893c57e-aeff-11ea-99db-0242ac11001b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:56:43.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-qbdtm" for this suite. 
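In Kubernetes terms, overriding the image's default command (the Docker ENTRYPOINT) means setting spec.containers[].command; the test above creates such a pod and then reads the test-container logs to confirm the override actually ran. The pod spec is not printed in the log, so the following is a minimal hand-written sketch of the same idea (the pod name, image, and echoed string are assumptions, not the test's fixture):

# Sketch: command replaces the image ENTRYPOINT; args would replace CMD.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # assumed image; any image with a default entrypoint works
    command: ["/bin/echo", "entrypoint overridden"]
EOF

kubectl logs entrypoint-override-demo       # should print "entrypoint overridden"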
Jun 15 11:56:51.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:56:51.573: INFO: namespace: e2e-tests-containers-qbdtm, resource: bindings, ignored listing per whitelist Jun 15 11:56:51.614: INFO: namespace e2e-tests-containers-qbdtm deletion completed in 8.232567632s • [SLOW TEST:33.933 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:56:51.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 15 11:57:00.114: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 15 11:57:00.136: INFO: Pod pod-with-prestop-http-hook still exists Jun 15 11:57:02.136: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 15 11:57:02.140: INFO: Pod pod-with-prestop-http-hook still exists Jun 15 11:57:04.136: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 15 11:57:04.139: INFO: Pod pod-with-prestop-http-hook still exists Jun 15 11:57:06.136: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 15 11:57:06.138: INFO: Pod pod-with-prestop-http-hook still exists Jun 15 11:57:08.136: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 15 11:57:08.139: INFO: Pod pod-with-prestop-http-hook still exists Jun 15 11:57:10.136: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 15 11:57:10.140: INFO: Pod pod-with-prestop-http-hook still exists Jun 15 11:57:12.136: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jun 15 11:57:12.139: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:57:12.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-b5cfn" for this suite. 
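A preStop httpGet hook is declared on the pod being deleted and fires an HTTP GET at some reachable endpoint before its container is stopped; the test above runs a separate handler pod to receive that request, deletes the hooked pod, waits for it to disappear, and only then checks that the handler saw the call. A generic sketch of the deleted pod's side follows; the container name, image, host, port, and path are placeholders, not the test's actual values:

# Sketch of a pod whose deletion triggers an HTTP GET before the container stops.
# host/port/path are placeholders; the e2e test points them at its handler pod.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1        # assumed; any long-running image works
    lifecycle:
      preStop:
        httpGet:
          host: 10.244.2.10            # placeholder handler address
          port: 8080                   # placeholder port
          path: /prestop               # placeholder path
EOF

# Deleting the pod fires the hook first, then termination proceeds:
kubectl delete pod pod-with-prestop-http-hook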
Jun 15 11:57:34.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:57:34.170: INFO: namespace: e2e-tests-container-lifecycle-hook-b5cfn, resource: bindings, ignored listing per whitelist Jun 15 11:57:34.228: INFO: namespace e2e-tests-container-lifecycle-hook-b5cfn deletion completed in 22.080141224s • [SLOW TEST:42.613 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:57:34.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jun 15 11:57:34.335: INFO: PodSpec: initContainers in spec.initContainers Jun 15 11:58:51.863: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-663c6b14-aeff-11ea-99db-0242ac11001b", GenerateName:"", Namespace:"e2e-tests-init-container-snlbm", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-snlbm/pods/pod-init-663c6b14-aeff-11ea-99db-0242ac11001b", UID:"663cee8f-aeff-11ea-99e8-0242ac110002", ResourceVersion:"16076748", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63727819054, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"335060978"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-lmswr", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000fec300), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lmswr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lmswr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lmswr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), 
ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00117a068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001828120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00117a0f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00117a110)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00117a118), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00117a11c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727819054, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727819054, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727819054, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727819054, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.13", StartTime:(*v1.Time)(0xc0019201a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0023feaf0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0023feb60)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://619fa5eb659787420ee3e5b66cf12faebbc2cb7a6c04eaa3bbe8276c8856e32e"}, 
v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0019201e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0019201c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:58:51.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-snlbm" for this suite. Jun 15 11:59:13.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:59:14.008: INFO: namespace: e2e-tests-init-container-snlbm, resource: bindings, ignored listing per whitelist Jun 15 11:59:14.019: INFO: namespace e2e-tests-init-container-snlbm deletion completed in 22.083601041s • [SLOW TEST:99.790 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:59:14.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 15 11:59:14.139: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-5qtgf,SelfLink:/api/v1/namespaces/e2e-tests-watch-5qtgf/configmaps/e2e-watch-test-resource-version,UID:a1b31029-aeff-11ea-99e8-0242ac110002,ResourceVersion:16076811,Generation:0,CreationTimestamp:2020-06-15 
11:59:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 15 11:59:14.139: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-5qtgf,SelfLink:/api/v1/namespaces/e2e-tests-watch-5qtgf/configmaps/e2e-watch-test-resource-version,UID:a1b31029-aeff-11ea-99e8-0242ac110002,ResourceVersion:16076812,Generation:0,CreationTimestamp:2020-06-15 11:59:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 11:59:14.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-5qtgf" for this suite. Jun 15 11:59:20.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 11:59:20.232: INFO: namespace: e2e-tests-watch-5qtgf, resource: bindings, ignored listing per whitelist Jun 15 11:59:20.235: INFO: namespace e2e-tests-watch-5qtgf deletion completed in 6.06535487s • [SLOW TEST:6.216 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 11:59:20.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-p5xz STEP: Creating a pod to test atomic-volume-subpath Jun 15 11:59:20.371: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-p5xz" in namespace "e2e-tests-subpath-jc88x" to be "success or failure" Jun 15 11:59:20.402: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 30.991068ms Jun 15 11:59:22.405: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033733156s Jun 15 11:59:25.313: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.94162173s Jun 15 11:59:27.974: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 7.602134214s Jun 15 11:59:29.977: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 9.605844848s Jun 15 11:59:32.182: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 11.810935686s Jun 15 11:59:34.187: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 13.815383194s Jun 15 11:59:37.643: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 17.271212717s Jun 15 11:59:41.178: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 20.80706862s Jun 15 11:59:43.183: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 22.811387204s Jun 15 11:59:48.760: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 28.388982998s Jun 15 11:59:50.765: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 30.393502833s Jun 15 11:59:52.770: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 32.398207226s Jun 15 11:59:55.236: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 34.864254761s Jun 15 11:59:57.271: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 36.899812947s Jun 15 12:00:00.673: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 40.302106297s Jun 15 12:00:02.936: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 42.565010794s Jun 15 12:00:05.042: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 44.670365947s Jun 15 12:00:07.045: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 46.673641878s Jun 15 12:00:09.090: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 48.71849657s Jun 15 12:00:11.476: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 51.104731289s Jun 15 12:00:13.830: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Pending", Reason="", readiness=false. Elapsed: 53.458413845s Jun 15 12:00:15.834: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Running", Reason="", readiness=false. Elapsed: 55.462459281s Jun 15 12:00:17.838: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Running", Reason="", readiness=false. Elapsed: 57.466936991s Jun 15 12:00:19.845: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Running", Reason="", readiness=false. Elapsed: 59.473153133s Jun 15 12:00:21.848: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Running", Reason="", readiness=false. Elapsed: 1m1.477045612s Jun 15 12:00:23.852: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Running", Reason="", readiness=false. Elapsed: 1m3.480793315s Jun 15 12:00:25.856: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Running", Reason="", readiness=false. Elapsed: 1m5.484306618s Jun 15 12:00:30.026: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m9.654687867s Jun 15 12:00:33.345: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.973183459s Jun 15 12:00:37.296: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.924649927s Jun 15 12:00:39.300: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.92847407s Jun 15 12:00:41.315: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.943567042s Jun 15 12:00:43.506: INFO: Pod "pod-subpath-test-downwardapi-p5xz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m23.134812252s STEP: Saw pod success Jun 15 12:00:43.506: INFO: Pod "pod-subpath-test-downwardapi-p5xz" satisfied condition "success or failure" Jun 15 12:00:43.510: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-downwardapi-p5xz container test-container-subpath-downwardapi-p5xz: STEP: delete the pod Jun 15 12:00:45.235: INFO: Waiting for pod pod-subpath-test-downwardapi-p5xz to disappear Jun 15 12:00:45.350: INFO: Pod pod-subpath-test-downwardapi-p5xz no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-p5xz Jun 15 12:00:45.350: INFO: Deleting pod "pod-subpath-test-downwardapi-p5xz" in namespace "e2e-tests-subpath-jc88x" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:00:45.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-jc88x" for this suite. Jun 15 12:00:57.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:00:57.437: INFO: namespace: e2e-tests-subpath-jc88x, resource: bindings, ignored listing per whitelist Jun 15 12:00:57.473: INFO: namespace e2e-tests-subpath-jc88x deletion completed in 12.117012318s • [SLOW TEST:97.238 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:00:57.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 15 12:00:58.123: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. 
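The complex-daemon case above drives scheduling purely through labels: the DaemonSet's pod template carries a nodeSelector that initially matches no node, so nothing runs until the test labels a node to match, and relabelling the node (or changing the selector) schedules and unschedules the single daemon pod counted in the entries that follow. A minimal sketch of a DaemonSet of that shape — the label key/value and image are illustrative assumptions, not what the suite generates; only the object name comes from the log:

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set                  # name taken from the log; everything else is illustrative
  spec:
    selector:
      matchLabels:
        app: daemon-set
    updateStrategy:
      type: RollingUpdate             # the test switches its strategy to RollingUpdate mid-run
    template:
      metadata:
        labels:
          app: daemon-set
      spec:
        nodeSelector:
          color: blue                 # pods schedule only on nodes labelled color=blue
        containers:
        - name: app
          image: nginx                # illustrative image

Relabelling a node, for example with kubectl label node hunter-worker color=blue --overwrite, is what moves the available-pod count from 0 to 1 below; flipping the label to green unschedules the pod again until the DaemonSet's nodeSelector is updated to green as well.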
Jun 15 12:00:58.371: INFO: Number of nodes with available pods: 0 Jun 15 12:00:58.371: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Jun 15 12:00:58.767: INFO: Number of nodes with available pods: 0 Jun 15 12:00:58.767: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:00:59.841: INFO: Number of nodes with available pods: 0 Jun 15 12:00:59.841: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:01:00.771: INFO: Number of nodes with available pods: 0 Jun 15 12:01:00.771: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:01:01.848: INFO: Number of nodes with available pods: 0 Jun 15 12:01:01.848: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:01:02.771: INFO: Number of nodes with available pods: 0 Jun 15 12:01:02.771: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:01:03.823: INFO: Number of nodes with available pods: 0 Jun 15 12:01:03.823: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:01:04.771: INFO: Number of nodes with available pods: 0 Jun 15 12:01:04.771: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:01:05.794: INFO: Number of nodes with available pods: 1 Jun 15 12:01:05.794: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jun 15 12:01:06.273: INFO: Number of nodes with available pods: 1 Jun 15 12:01:06.273: INFO: Number of running nodes: 0, number of available pods: 1 Jun 15 12:01:07.277: INFO: Number of nodes with available pods: 0 Jun 15 12:01:07.277: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jun 15 12:01:07.429: INFO: Number of nodes with available pods: 0 Jun 15 12:01:07.429: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:01:08.709: INFO: Number of nodes with available pods: 0 Jun 15 12:01:08.709: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:01:09.433: INFO: Number of nodes with available pods: 0 Jun 15 12:01:09.433: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:01:10.543: INFO: Number of nodes with available pods: 0 Jun 15 12:01:10.543: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:01:11.807: INFO: Number of nodes with available pods: 0 Jun 15 12:01:11.807: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:01:12.524: INFO: Number of nodes with available pods: 0 Jun 15 12:01:12.524: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:01:13.432: INFO: Number of nodes with available pods: 0 Jun 15 12:01:13.433: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:01:14.666: INFO: Number of nodes with available pods: 0 Jun 15 12:01:14.666: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:01:15.433: INFO: Number of nodes with available pods: 0 Jun 15 12:01:15.433: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:01:16.495: INFO: Number of nodes with available pods: 1 Jun 15 12:01:16.495: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting 
DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-xtksv, will wait for the garbage collector to delete the pods Jun 15 12:01:16.701: INFO: Deleting DaemonSet.extensions daemon-set took: 76.316138ms Jun 15 12:01:16.902: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.283594ms Jun 15 12:01:31.404: INFO: Number of nodes with available pods: 0 Jun 15 12:01:31.404: INFO: Number of running nodes: 0, number of available pods: 0 Jun 15 12:01:31.406: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-xtksv/daemonsets","resourceVersion":"16077132"},"items":null} Jun 15 12:01:31.572: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-xtksv/pods","resourceVersion":"16077132"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:01:31.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-xtksv" for this suite. Jun 15 12:01:38.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:01:38.316: INFO: namespace: e2e-tests-daemonsets-xtksv, resource: bindings, ignored listing per whitelist Jun 15 12:01:38.364: INFO: namespace e2e-tests-daemonsets-xtksv deletion completed in 6.501063608s • [SLOW TEST:40.891 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:01:38.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 15 12:01:38.783: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f7ef772f-aeff-11ea-99db-0242ac11001b" in namespace "e2e-tests-downward-api-q6zsn" to be "success or failure" Jun 15 12:01:38.860: INFO: Pod "downwardapi-volume-f7ef772f-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 77.533455ms Jun 15 12:01:40.864: INFO: Pod "downwardapi-volume-f7ef772f-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080877138s Jun 15 12:01:42.867: INFO: Pod "downwardapi-volume-f7ef772f-aeff-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.084138401s Jun 15 12:01:44.870: INFO: Pod "downwardapi-volume-f7ef772f-aeff-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 6.087635783s Jun 15 12:01:46.873: INFO: Pod "downwardapi-volume-f7ef772f-aeff-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090721707s STEP: Saw pod success Jun 15 12:01:46.873: INFO: Pod "downwardapi-volume-f7ef772f-aeff-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:01:46.875: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-f7ef772f-aeff-11ea-99db-0242ac11001b container client-container: STEP: delete the pod Jun 15 12:01:46.968: INFO: Waiting for pod downwardapi-volume-f7ef772f-aeff-11ea-99db-0242ac11001b to disappear Jun 15 12:01:46.983: INFO: Pod downwardapi-volume-f7ef772f-aeff-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:01:46.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-q6zsn" for this suite. Jun 15 12:01:52.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:01:53.101: INFO: namespace: e2e-tests-downward-api-q6zsn, resource: bindings, ignored listing per whitelist Jun 15 12:01:53.118: INFO: namespace e2e-tests-downward-api-q6zsn deletion completed in 6.13170823s • [SLOW TEST:14.754 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:01:53.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-0098b41f-af00-11ea-99db-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 15 12:01:54.670: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-00996630-af00-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-xt6r7" to be "success or failure" Jun 15 12:01:54.896: INFO: Pod "pod-projected-configmaps-00996630-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 226.456795ms Jun 15 12:01:56.900: INFO: Pod "pod-projected-configmaps-00996630-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230566655s Jun 15 12:01:58.944: INFO: Pod "pod-projected-configmaps-00996630-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.274424672s Jun 15 12:02:00.948: INFO: Pod "pod-projected-configmaps-00996630-af00-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.277904183s STEP: Saw pod success Jun 15 12:02:00.948: INFO: Pod "pod-projected-configmaps-00996630-af00-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:02:00.950: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-00996630-af00-11ea-99db-0242ac11001b container projected-configmap-volume-test: STEP: delete the pod Jun 15 12:02:00.988: INFO: Waiting for pod pod-projected-configmaps-00996630-af00-11ea-99db-0242ac11001b to disappear Jun 15 12:02:01.039: INFO: Pod pod-projected-configmaps-00996630-af00-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:02:01.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xt6r7" for this suite. Jun 15 12:02:07.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:02:07.131: INFO: namespace: e2e-tests-projected-xt6r7, resource: bindings, ignored listing per whitelist Jun 15 12:02:07.152: INFO: namespace e2e-tests-projected-xt6r7 deletion completed in 6.110247687s • [SLOW TEST:14.033 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:02:07.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-08ebd7a2-af00-11ea-99db-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 15 12:02:07.299: INFO: Waiting up to 5m0s for pod "pod-configmaps-08ee526e-af00-11ea-99db-0242ac11001b" in namespace "e2e-tests-configmap-v4d4k" to be "success or failure" Jun 15 12:02:07.344: INFO: Pod "pod-configmaps-08ee526e-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 45.109135ms Jun 15 12:02:10.161: INFO: Pod "pod-configmaps-08ee526e-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.861945706s Jun 15 12:02:12.276: INFO: Pod "pod-configmaps-08ee526e-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.976509057s Jun 15 12:02:14.280: INFO: Pod "pod-configmaps-08ee526e-af00-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.980622809s Jun 15 12:02:16.312: INFO: Pod "pod-configmaps-08ee526e-af00-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.012260924s STEP: Saw pod success Jun 15 12:02:16.312: INFO: Pod "pod-configmaps-08ee526e-af00-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:02:16.315: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-08ee526e-af00-11ea-99db-0242ac11001b container configmap-volume-test: STEP: delete the pod Jun 15 12:02:16.572: INFO: Waiting for pod pod-configmaps-08ee526e-af00-11ea-99db-0242ac11001b to disappear Jun 15 12:02:16.583: INFO: Pod pod-configmaps-08ee526e-af00-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:02:16.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-v4d4k" for this suite. Jun 15 12:02:22.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:02:22.635: INFO: namespace: e2e-tests-configmap-v4d4k, resource: bindings, ignored listing per whitelist Jun 15 12:02:22.671: INFO: namespace e2e-tests-configmap-v4d4k deletion completed in 6.084545512s • [SLOW TEST:15.519 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:02:22.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 15 12:02:22.822: INFO: Waiting up to 5m0s for pod "pod-12272541-af00-11ea-99db-0242ac11001b" in namespace "e2e-tests-emptydir-wpbcr" to be "success or failure" Jun 15 12:02:22.847: INFO: Pod "pod-12272541-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.770556ms Jun 15 12:02:24.851: INFO: Pod "pod-12272541-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028512659s Jun 15 12:02:27.190: INFO: Pod "pod-12272541-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.368131501s Jun 15 12:02:29.195: INFO: Pod "pod-12272541-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.372512102s Jun 15 12:02:31.359: INFO: Pod "pod-12272541-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.536825179s Jun 15 12:02:33.428: INFO: Pod "pod-12272541-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.606078755s Jun 15 12:02:35.603: INFO: Pod "pod-12272541-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.780483704s Jun 15 12:02:37.633: INFO: Pod "pod-12272541-af00-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.810772727s STEP: Saw pod success Jun 15 12:02:37.633: INFO: Pod "pod-12272541-af00-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:02:37.636: INFO: Trying to get logs from node hunter-worker pod pod-12272541-af00-11ea-99db-0242ac11001b container test-container: STEP: delete the pod Jun 15 12:02:37.685: INFO: Waiting for pod pod-12272541-af00-11ea-99db-0242ac11001b to disappear Jun 15 12:02:37.927: INFO: Pod pod-12272541-af00-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:02:37.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wpbcr" for this suite. Jun 15 12:02:44.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:02:44.819: INFO: namespace: e2e-tests-emptydir-wpbcr, resource: bindings, ignored listing per whitelist Jun 15 12:02:44.844: INFO: namespace e2e-tests-emptydir-wpbcr deletion completed in 6.913486153s • [SLOW TEST:22.173 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:02:44.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-rgg2j STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-rgg2j to expose endpoints map[] Jun 15 12:02:44.996: INFO: Get endpoints failed (6.52002ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jun 15 12:02:46.003: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-rgg2j exposes endpoints map[] (1.013008343s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-rgg2j STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-rgg2j to expose endpoints map[pod1:[100]] Jun 15 12:02:51.634: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.625261617s elapsed, will retry) Jun 15 12:03:06.427: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] 
(20.418629069s elapsed, will retry) Jun 15 12:03:12.906: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (26.897594058s elapsed, will retry) Jun 15 12:03:18.492: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-rgg2j exposes endpoints map[pod1:[100]] (32.483438139s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-rgg2j STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-rgg2j to expose endpoints map[pod1:[100] pod2:[101]] Jun 15 12:03:24.158: INFO: Unexpected endpoints: found map[2001b528-af00-11ea-99e8-0242ac110002:[100]], expected map[pod1:[100] pod2:[101]] (5.660588894s elapsed, will retry) Jun 15 12:03:29.483: INFO: Unexpected endpoints: found map[2001b528-af00-11ea-99e8-0242ac110002:[100]], expected map[pod1:[100] pod2:[101]] (10.98554604s elapsed, will retry) Jun 15 12:03:31.780: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-rgg2j exposes endpoints map[pod1:[100] pod2:[101]] (13.282212404s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-rgg2j STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-rgg2j to expose endpoints map[pod2:[101]] Jun 15 12:03:33.062: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-rgg2j exposes endpoints map[pod2:[101]] (1.266357998s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-rgg2j STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-rgg2j to expose endpoints map[] Jun 15 12:03:34.647: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-rgg2j exposes endpoints map[] (1.57991271s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:03:35.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-rgg2j" for this suite. 
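The endpoint maps being waited on above (map[pod1:[100] pod2:[101]]) come from a Service whose two ports target differently named container ports, so each backing pod contributes exactly one port to the endpoints object. A sketch of that arrangement, assuming hypothetical port names, selector label, and image (only the service and pod names come from the log):

  apiVersion: v1
  kind: Service
  metadata:
    name: multi-endpoint-test
  spec:
    selector:
      app: multi-endpoint-test
    ports:
    - name: portname1
      port: 80
      targetPort: portname1           # resolves to pod1's containerPort 100
    - name: portname2
      port: 81
      targetPort: portname2           # resolves to pod2's containerPort 101
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod1
    labels:
      app: multi-endpoint-test
  spec:
    containers:
    - name: serve
      image: nginx                    # illustrative; any container answering on the port works
      ports:
      - name: portname1
        containerPort: 100

pod2 mirrors pod1 with a container port named portname2 on 101, which is why deleting pod1 and then pod2 shrinks the expected endpoint map step by step in the entries above.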
Jun 15 12:03:57.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:03:57.996: INFO: namespace: e2e-tests-services-rgg2j, resource: bindings, ignored listing per whitelist Jun 15 12:03:58.029: INFO: namespace e2e-tests-services-rgg2j deletion completed in 22.557181993s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:73.184 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:03:58.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 15 12:03:58.122: INFO: Waiting up to 5m0s for pod "pod-4afbd823-af00-11ea-99db-0242ac11001b" in namespace "e2e-tests-emptydir-mbnh8" to be "success or failure" Jun 15 12:03:58.127: INFO: Pod "pod-4afbd823-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.527111ms Jun 15 12:04:00.130: INFO: Pod "pod-4afbd823-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007315148s Jun 15 12:04:02.133: INFO: Pod "pod-4afbd823-af00-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010842736s STEP: Saw pod success Jun 15 12:04:02.133: INFO: Pod "pod-4afbd823-af00-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:04:02.136: INFO: Trying to get logs from node hunter-worker pod pod-4afbd823-af00-11ea-99db-0242ac11001b container test-container: STEP: delete the pod Jun 15 12:04:02.176: INFO: Waiting for pod pod-4afbd823-af00-11ea-99db-0242ac11001b to disappear Jun 15 12:04:02.195: INFO: Pod pod-4afbd823-af00-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:04:02.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-mbnh8" for this suite. 
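The emptydir conformance cases in this run all follow one pattern: a short-lived pod mounts an emptyDir volume on either tmpfs or the node's default medium, writes a file with the requested mode as root or as a non-root UID, prints the result, and exits so the pod lands in Succeeded as polled above. A hand-written equivalent of the (non-root,0644,default) variant, with an illustrative image, path, and user ID:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0644-demo          # hypothetical name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                 # the "non-root" part of the test name
    containers:
    - name: test-container
      image: busybox                  # illustrative image
      command: ["sh", "-c", "touch /test-volume/file && chmod 0644 /test-volume/file && ls -l /test-volume/file"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                    # default medium; medium: Memory gives the tmpfs variants

Once the command exits the pod completes, which is the "success or failure" condition the poller keeps checking; the test then fetches the container log, as the "Trying to get logs from node ..." entries show, before deleting the pod.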
Jun 15 12:04:08.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:04:08.221: INFO: namespace: e2e-tests-emptydir-mbnh8, resource: bindings, ignored listing per whitelist Jun 15 12:04:08.263: INFO: namespace e2e-tests-emptydir-mbnh8 deletion completed in 6.065309877s • [SLOW TEST:10.234 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:04:08.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-51f3e072-af00-11ea-99db-0242ac11001b STEP: Creating the pod STEP: Updating configmap configmap-test-upd-51f3e072-af00-11ea-99db-0242ac11001b STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:04:16.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rngz5" for this suite. 
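The "updates should be reflected in volume" case above relies on the kubelet resyncing ConfigMap-backed volumes while the pod keeps running: the test edits the ConfigMap and then polls the mounted file until the new value appears ("waiting to observe update in volume"). A sketch of that setup with hypothetical names, key, and image (the suite appends a unique suffix to its own names):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: configmap-test-upd-demo     # hypothetical name
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-demo         # hypothetical name
  spec:
    containers:
    - name: configmap-volume-test
      image: busybox                  # illustrative; the container just keeps reading the file
      command: ["sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 2; done"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-upd-demo

Changing data-1 (for example with kubectl edit configmap configmap-test-upd-demo) is eventually written through to /etc/configmap-volume/data-1 without restarting the pod, which is exactly what "waiting to observe update in volume" above waits for.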
Jun 15 12:04:39.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:04:39.049: INFO: namespace: e2e-tests-configmap-rngz5, resource: bindings, ignored listing per whitelist Jun 15 12:04:39.097: INFO: namespace e2e-tests-configmap-rngz5 deletion completed in 22.147654786s • [SLOW TEST:30.834 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:04:39.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 15 12:04:39.219: INFO: Waiting up to 5m0s for pod "pod-63799394-af00-11ea-99db-0242ac11001b" in namespace "e2e-tests-emptydir-7drkp" to be "success or failure" Jun 15 12:04:39.226: INFO: Pod "pod-63799394-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.11046ms Jun 15 12:04:41.229: INFO: Pod "pod-63799394-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010258092s Jun 15 12:04:43.833: INFO: Pod "pod-63799394-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.613833965s Jun 15 12:04:45.837: INFO: Pod "pod-63799394-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.618091057s Jun 15 12:04:47.840: INFO: Pod "pod-63799394-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.621001677s Jun 15 12:04:49.845: INFO: Pod "pod-63799394-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.625942981s Jun 15 12:04:51.848: INFO: Pod "pod-63799394-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.629262933s Jun 15 12:04:53.916: INFO: Pod "pod-63799394-af00-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 14.697351568s Jun 15 12:04:56.236: INFO: Pod "pod-63799394-af00-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 17.016584245s Jun 15 12:04:58.239: INFO: Pod "pod-63799394-af00-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 19.020216321s STEP: Saw pod success Jun 15 12:04:58.239: INFO: Pod "pod-63799394-af00-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:04:58.242: INFO: Trying to get logs from node hunter-worker2 pod pod-63799394-af00-11ea-99db-0242ac11001b container test-container: STEP: delete the pod Jun 15 12:04:58.716: INFO: Waiting for pod pod-63799394-af00-11ea-99db-0242ac11001b to disappear Jun 15 12:04:58.775: INFO: Pod pod-63799394-af00-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:04:58.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-7drkp" for this suite. Jun 15 12:05:06.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:05:06.988: INFO: namespace: e2e-tests-emptydir-7drkp, resource: bindings, ignored listing per whitelist Jun 15 12:05:06.997: INFO: namespace e2e-tests-emptydir-7drkp deletion completed in 8.217673882s • [SLOW TEST:27.900 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:05:06.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted Jun 15 12:05:27.436: INFO: 5 pods remaining Jun 15 12:05:27.436: INFO: 5 pods has nil DeletionTimestamp Jun 15 12:05:27.436: INFO: STEP: Gathering metrics W0615 12:05:36.728089 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
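The garbage-collector case above turns on ownerReferences: every pod starts out owned by simpletest-rc-to-be-deleted, half of them get simpletest-rc-to-stay added as a second owner, and the first RC is then deleted in a way that waits for its dependents. Pods whose only owner was the deleted RC are collected; pods that also list the surviving RC must be left alone. The surviving pods' metadata looks roughly like this (the pod name and UIDs are placeholders; only the RC names come from the log):

  apiVersion: v1
  kind: Pod
  metadata:
    name: simpletest-rc-to-be-deleted-xxxxx        # placeholder pod name
    ownerReferences:
    - apiVersion: v1
      kind: ReplicationController
      name: simpletest-rc-to-be-deleted
      uid: 00000000-0000-0000-0000-000000000001    # placeholder UID
    - apiVersion: v1
      kind: ReplicationController
      name: simpletest-rc-to-stay
      uid: 00000000-0000-0000-0000-000000000002    # placeholder UID
  spec:
    containers:
    - name: nginx
      image: nginx                                 # illustrative image

Because the garbage collector only removes an object once all of its listed owners are gone, the reference to simpletest-rc-to-stay keeps these pods alive, which is the "should not delete dependents" assertion; the "5 pods remaining" entries above come from the test's wait loop while this plays out.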
Jun 15 12:05:36.728: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:05:36.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-prj9m" for this suite. Jun 15 12:06:00.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:06:00.702: INFO: namespace: e2e-tests-gc-prj9m, resource: bindings, ignored listing per whitelist Jun 15 12:06:00.738: INFO: namespace e2e-tests-gc-prj9m deletion completed in 23.02501994s • [SLOW TEST:53.740 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:06:00.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-m7rlv STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 15 12:06:02.171: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 15 12:06:36.495: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.64:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-m7rlv PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Jun 15 12:06:36.495: INFO: >>> kubeConfig: /root/.kube/config I0615 12:06:36.526865 6 log.go:172] (0xc0022a2000) (0xc002398500) Create stream I0615 12:06:36.526897 6 log.go:172] (0xc0022a2000) (0xc002398500) Stream added, broadcasting: 1 I0615 12:06:36.528807 6 log.go:172] (0xc0022a2000) Reply frame received for 1 I0615 12:06:36.528858 6 log.go:172] (0xc0022a2000) (0xc000a94640) Create stream I0615 12:06:36.528877 6 log.go:172] (0xc0022a2000) (0xc000a94640) Stream added, broadcasting: 3 I0615 12:06:36.530001 6 log.go:172] (0xc0022a2000) Reply frame received for 3 I0615 12:06:36.530032 6 log.go:172] (0xc0022a2000) (0xc000a94780) Create stream I0615 12:06:36.530043 6 log.go:172] (0xc0022a2000) (0xc000a94780) Stream added, broadcasting: 5 I0615 12:06:36.530871 6 log.go:172] (0xc0022a2000) Reply frame received for 5 I0615 12:06:36.666477 6 log.go:172] (0xc0022a2000) Data frame received for 3 I0615 12:06:36.666518 6 log.go:172] (0xc000a94640) (3) Data frame handling I0615 12:06:36.666541 6 log.go:172] (0xc000a94640) (3) Data frame sent I0615 12:06:36.666560 6 log.go:172] (0xc0022a2000) Data frame received for 3 I0615 12:06:36.666576 6 log.go:172] (0xc000a94640) (3) Data frame handling I0615 12:06:36.666922 6 log.go:172] (0xc0022a2000) Data frame received for 5 I0615 12:06:36.666950 6 log.go:172] (0xc000a94780) (5) Data frame handling I0615 12:06:36.667890 6 log.go:172] (0xc0022a2000) Data frame received for 1 I0615 12:06:36.667924 6 log.go:172] (0xc002398500) (1) Data frame handling I0615 12:06:36.667965 6 log.go:172] (0xc002398500) (1) Data frame sent I0615 12:06:36.667989 6 log.go:172] (0xc0022a2000) (0xc002398500) Stream removed, broadcasting: 1 I0615 12:06:36.668098 6 log.go:172] (0xc0022a2000) (0xc002398500) Stream removed, broadcasting: 1 I0615 12:06:36.668125 6 log.go:172] (0xc0022a2000) (0xc000a94640) Stream removed, broadcasting: 3 I0615 12:06:36.668224 6 log.go:172] (0xc0022a2000) Go away received I0615 12:06:36.668388 6 log.go:172] (0xc0022a2000) (0xc000a94780) Stream removed, broadcasting: 5 Jun 15 12:06:36.668: INFO: Found all expected endpoints: [netserver-0] Jun 15 12:06:36.670: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.24:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-m7rlv PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 15 12:06:36.670: INFO: >>> kubeConfig: /root/.kube/config I0615 12:06:36.696626 6 log.go:172] (0xc000c99600) (0xc000a94a00) Create stream I0615 12:06:36.696658 6 log.go:172] (0xc000c99600) (0xc000a94a00) Stream added, broadcasting: 1 I0615 12:06:36.698263 6 log.go:172] (0xc000c99600) Reply frame received for 1 I0615 12:06:36.698298 6 log.go:172] (0xc000c99600) (0xc000a94aa0) Create stream I0615 12:06:36.698313 6 log.go:172] (0xc000c99600) (0xc000a94aa0) Stream added, broadcasting: 3 I0615 12:06:36.698856 6 log.go:172] (0xc000c99600) Reply frame received for 3 I0615 12:06:36.698874 6 log.go:172] (0xc000c99600) (0xc001e74000) Create stream I0615 12:06:36.698880 6 log.go:172] (0xc000c99600) (0xc001e74000) Stream added, broadcasting: 5 I0615 12:06:36.699358 6 log.go:172] (0xc000c99600) Reply frame received for 5 I0615 12:06:36.748462 6 log.go:172] (0xc000c99600) Data frame received for 3 I0615 12:06:36.748477 6 log.go:172] (0xc000a94aa0) (3) Data frame handling I0615 12:06:36.748489 6 log.go:172] (0xc000a94aa0) (3) Data frame sent I0615 12:06:36.748495 6 log.go:172] 
(0xc000c99600) Data frame received for 3 I0615 12:06:36.748501 6 log.go:172] (0xc000a94aa0) (3) Data frame handling I0615 12:06:36.748799 6 log.go:172] (0xc000c99600) Data frame received for 5 I0615 12:06:36.748823 6 log.go:172] (0xc001e74000) (5) Data frame handling I0615 12:06:36.750021 6 log.go:172] (0xc000c99600) Data frame received for 1 I0615 12:06:36.750029 6 log.go:172] (0xc000a94a00) (1) Data frame handling I0615 12:06:36.750034 6 log.go:172] (0xc000a94a00) (1) Data frame sent I0615 12:06:36.750041 6 log.go:172] (0xc000c99600) (0xc000a94a00) Stream removed, broadcasting: 1 I0615 12:06:36.750074 6 log.go:172] (0xc000c99600) Go away received I0615 12:06:36.750121 6 log.go:172] (0xc000c99600) (0xc000a94a00) Stream removed, broadcasting: 1 I0615 12:06:36.750139 6 log.go:172] (0xc000c99600) (0xc000a94aa0) Stream removed, broadcasting: 3 I0615 12:06:36.750155 6 log.go:172] (0xc000c99600) (0xc001e74000) Stream removed, broadcasting: 5 Jun 15 12:06:36.750: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:06:36.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-m7rlv" for this suite. Jun 15 12:07:03.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:07:03.040: INFO: namespace: e2e-tests-pod-network-test-m7rlv, resource: bindings, ignored listing per whitelist Jun 15 12:07:03.073: INFO: namespace e2e-tests-pod-network-test-m7rlv deletion completed in 26.149099276s • [SLOW TEST:62.335 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:07:03.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 15 12:07:03.176: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b948d942-af00-11ea-99db-0242ac11001b" in namespace "e2e-tests-downward-api-nxbhk" to be "success or failure" Jun 15 12:07:03.180: INFO: Pod "downwardapi-volume-b948d942-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.311081ms Jun 15 12:07:05.407: INFO: Pod "downwardapi-volume-b948d942-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230980578s Jun 15 12:07:07.410: INFO: Pod "downwardapi-volume-b948d942-af00-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.234131713s Jun 15 12:07:09.414: INFO: Pod "downwardapi-volume-b948d942-af00-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.238169273s STEP: Saw pod success Jun 15 12:07:09.414: INFO: Pod "downwardapi-volume-b948d942-af00-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:07:09.417: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-b948d942-af00-11ea-99db-0242ac11001b container client-container: STEP: delete the pod Jun 15 12:07:09.552: INFO: Waiting for pod downwardapi-volume-b948d942-af00-11ea-99db-0242ac11001b to disappear Jun 15 12:07:09.582: INFO: Pod downwardapi-volume-b948d942-af00-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:07:09.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-nxbhk" for this suite. Jun 15 12:07:15.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:07:16.007: INFO: namespace: e2e-tests-downward-api-nxbhk, resource: bindings, ignored listing per whitelist Jun 15 12:07:16.033: INFO: namespace e2e-tests-downward-api-nxbhk deletion completed in 6.448568297s • [SLOW TEST:12.960 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:07:16.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-c11bddc6-af00-11ea-99db-0242ac11001b STEP: Creating secret with name secret-projected-all-test-volume-c11bddb6-af00-11ea-99db-0242ac11001b STEP: Creating a pod to test Check all projections for projected volume plugin Jun 15 12:07:16.308: INFO: Waiting up to 5m0s for pod "projected-volume-c11bdd89-af00-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-gjzvm" to be "success or failure" Jun 15 12:07:16.312: INFO: Pod "projected-volume-c11bdd89-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.966278ms Jun 15 12:07:18.356: INFO: Pod "projected-volume-c11bdd89-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048023886s Jun 15 12:07:20.360: INFO: Pod "projected-volume-c11bdd89-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052218401s Jun 15 12:07:22.364: INFO: Pod "projected-volume-c11bdd89-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056226172s Jun 15 12:07:25.896: INFO: Pod "projected-volume-c11bdd89-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.588705808s Jun 15 12:07:28.165: INFO: Pod "projected-volume-c11bdd89-af00-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.857523146s STEP: Saw pod success Jun 15 12:07:28.165: INFO: Pod "projected-volume-c11bdd89-af00-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:07:28.168: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-c11bdd89-af00-11ea-99db-0242ac11001b container projected-all-volume-test: STEP: delete the pod Jun 15 12:07:28.256: INFO: Waiting for pod projected-volume-c11bdd89-af00-11ea-99db-0242ac11001b to disappear Jun 15 12:07:28.290: INFO: Pod projected-volume-c11bdd89-af00-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:07:28.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gjzvm" for this suite. Jun 15 12:07:44.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:07:44.428: INFO: namespace: e2e-tests-projected-gjzvm, resource: bindings, ignored listing per whitelist Jun 15 12:07:44.440: INFO: namespace e2e-tests-projected-gjzvm deletion completed in 16.146912972s • [SLOW TEST:28.407 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:07:44.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Jun 15 12:07:45.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jun 15 12:07:55.656: INFO: stderr: "" Jun 15 12:07:55.656: INFO: 
stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:07:55.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-g62v4" for this suite. Jun 15 12:08:01.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:08:01.702: INFO: namespace: e2e-tests-kubectl-g62v4, resource: bindings, ignored listing per whitelist Jun 15 12:08:01.753: INFO: namespace e2e-tests-kubectl-g62v4 deletion completed in 6.092495335s • [SLOW TEST:17.312 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:08:01.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 15 12:08:01.914: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc4ca96b-af00-11ea-99db-0242ac11001b" in namespace "e2e-tests-downward-api-qmpqd" to be "success or failure" Jun 15 12:08:01.919: INFO: Pod "downwardapi-volume-dc4ca96b-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.442004ms Jun 15 12:08:03.979: INFO: Pod "downwardapi-volume-dc4ca96b-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065356686s Jun 15 12:08:05.983: INFO: Pod "downwardapi-volume-dc4ca96b-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069459546s Jun 15 12:08:07.986: INFO: Pod "downwardapi-volume-dc4ca96b-af00-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 15 12:08:01.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 15 12:08:01.914: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc4ca96b-af00-11ea-99db-0242ac11001b" in namespace "e2e-tests-downward-api-qmpqd" to be "success or failure"
Jun 15 12:08:01.919: INFO: Pod "downwardapi-volume-dc4ca96b-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.442004ms
Jun 15 12:08:03.979: INFO: Pod "downwardapi-volume-dc4ca96b-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065356686s
Jun 15 12:08:05.983: INFO: Pod "downwardapi-volume-dc4ca96b-af00-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069459546s
Jun 15 12:08:07.986: INFO: Pod "downwardapi-volume-dc4ca96b-af00-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.072561981s
STEP: Saw pod success
Jun 15 12:08:07.986: INFO: Pod "downwardapi-volume-dc4ca96b-af00-11ea-99db-0242ac11001b" satisfied condition "success or failure"
Jun 15 12:08:07.988: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-dc4ca96b-af00-11ea-99db-0242ac11001b container client-container:
STEP: delete the pod
Jun 15 12:08:08.033: INFO: Waiting for pod downwardapi-volume-dc4ca96b-af00-11ea-99db-0242ac11001b to disappear
Jun 15 12:08:08.063: INFO: Pod downwardapi-volume-dc4ca96b-af00-11ea-99db-0242ac11001b no longer exists
[AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 15 12:08:08.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qmpqd" for this suite.
Jun 15 12:08:14.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 15 12:08:15.049: INFO: namespace: e2e-tests-downward-api-qmpqd, resource: bindings, ignored listing per whitelist
Jun 15 12:08:15.049: INFO: namespace e2e-tests-downward-api-qmpqd deletion completed in 6.982634839s
• [SLOW TEST:13.295 seconds]
[sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
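The Downward API volume test above exposes the container's own memory request through a downwardAPI volume file and checks what the client-container reads back. A sketch of that style of pod spec using the core/v1 types follows; the 64Mi request, mount path and file name are illustrative choices, not the values the e2e test itself uses.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIMemoryRequestPod builds a pod whose downwardAPI volume writes the
// container's memory request to /etc/podinfo/memory_request.
func downwardAPIMemoryRequestPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", downwardAPIMemoryRequestPod("default"))
}

With the default divisor of 1, the projected file contains the request in bytes (67108864 for 64Mi), which is the kind of value a check on the container's output would compare against.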
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 15 12:08:15.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jun 15 12:08:15.505: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 15 12:08:33.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-87l4w" for this suite.
Jun 15 12:08:40.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 15 12:08:40.859: INFO: namespace: e2e-tests-init-container-87l4w, resource: bindings, ignored listing per whitelist
Jun 15 12:08:40.926: INFO: namespace e2e-tests-init-container-87l4w deletion completed in 6.189987865s
• [SLOW TEST:25.877 seconds]
[k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 15 12:08:40.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jun 15 12:08:41.035: INFO: Creating deployment "nginx-deployment"
Jun 15 12:08:41.040: INFO: Waiting for observed generation 1
Jun 15 12:08:43.714: INFO: Waiting for all required pods to come up
Jun 15 12:08:43.717: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jun 15 12:09:33.928: INFO: Waiting for deployment "nginx-deployment" to complete
Jun 15 12:09:33.955: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jun 15 12:09:33.960: INFO: Updating deployment nginx-deployment
Jun 15 12:09:33.961: INFO: Waiting for observed generation 2
Jun 15 12:09:36.040: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jun 15 12:09:36.042: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jun 15 12:09:36.043: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jun 15 12:09:36.048: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jun 15 12:09:36.048: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jun 15 12:09:36.050: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jun 15 12:09:36.053: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jun 15 12:09:36.053: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jun 15 12:09:36.056: INFO: Updating deployment nginx-deployment
Jun 15 12:09:36.056: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jun 15 12:09:36.427: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jun 15 12:09:36.766: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 15 12:09:37.173: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xk7fg/deployments/nginx-deployment,UID:f39efa8d-af00-11ea-99e8-0242ac110002,ResourceVersion:16078752,Generation:3,CreationTimestamp:2020-06-15 12:08:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-06-15 12:09:34 +0000 UTC 2020-06-15 12:08:41 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-06-15 12:09:36 +0000 UTC 2020-06-15 12:09:36 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Jun 15 12:09:37.249: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xk7fg/replicasets/nginx-deployment-5c98f8fb5,UID:132ac81c-af01-11ea-99e8-0242ac110002,ResourceVersion:16078791,Generation:3,CreationTimestamp:2020-06-15 12:09:33 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f39efa8d-af00-11ea-99e8-0242ac110002 0xc00225c6e7 0xc00225c6e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 15 12:09:37.249: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jun 15 12:09:37.250: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xk7fg/replicasets/nginx-deployment-85ddf47c5d,UID:f3a74bba-af00-11ea-99e8-0242ac110002,ResourceVersion:16078776,Generation:3,CreationTimestamp:2020-06-15 12:08:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f39efa8d-af00-11ea-99e8-0242ac110002 0xc00225c927 0xc00225c928}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jun 15 12:09:37.607: INFO: Pod "nginx-deployment-5c98f8fb5-28vwm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-28vwm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-5c98f8fb5-28vwm,UID:132e1ef0-af01-11ea-99e8-0242ac110002,ResourceVersion:16078705,Generation:0,CreationTimestamp:2020-06-15 12:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 132ac81c-af01-11ea-99e8-0242ac110002 0xc0021946b7 0xc0021946b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002194730} {node.kubernetes.io/unreachable Exists NoExecute 0xc002194750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-15 12:09:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.607: INFO: Pod "nginx-deployment-5c98f8fb5-46x8d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-46x8d,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-5c98f8fb5-46x8d,UID:136599e5-af01-11ea-99e8-0242ac110002,ResourceVersion:16078727,Generation:0,CreationTimestamp:2020-06-15 12:09:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 132ac81c-af01-11ea-99e8-0242ac110002 0xc002194ca7 0xc002194ca8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002194d20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002194d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-15 12:09:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.607: INFO: Pod "nginx-deployment-5c98f8fb5-5xd8l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5xd8l,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-5c98f8fb5-5xd8l,UID:14de39b9-af01-11ea-99e8-0242ac110002,ResourceVersion:16078772,Generation:0,CreationTimestamp:2020-06-15 12:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 132ac81c-af01-11ea-99e8-0242ac110002 0xc0021950c7 0xc0021950c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002195140} {node.kubernetes.io/unreachable Exists NoExecute 0xc002195160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:37 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.607: INFO: Pod "nginx-deployment-5c98f8fb5-7vg4m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7vg4m,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-5c98f8fb5-7vg4m,UID:14d6e6d5-af01-11ea-99e8-0242ac110002,ResourceVersion:16078758,Generation:0,CreationTimestamp:2020-06-15 12:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 132ac81c-af01-11ea-99e8-0242ac110002 0xc002195497 0xc002195498}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002195510} {node.kubernetes.io/unreachable Exists NoExecute 0xc002195530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:36 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.607: INFO: Pod "nginx-deployment-5c98f8fb5-b2qk6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-b2qk6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-5c98f8fb5-b2qk6,UID:14de7999-af01-11ea-99e8-0242ac110002,ResourceVersion:16078782,Generation:0,CreationTimestamp:2020-06-15 12:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 132ac81c-af01-11ea-99e8-0242ac110002 0xc0021955a7 0xc0021955a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002195650} {node.kubernetes.io/unreachable Exists NoExecute 0xc002195670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:37 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.607: INFO: Pod "nginx-deployment-5c98f8fb5-gngfp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gngfp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-5c98f8fb5-gngfp,UID:14de8101-af01-11ea-99e8-0242ac110002,ResourceVersion:16078780,Generation:0,CreationTimestamp:2020-06-15 12:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 132ac81c-af01-11ea-99e8-0242ac110002 0xc0021956e7 0xc0021956e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002195760} {node.kubernetes.io/unreachable Exists NoExecute 0xc002195780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:37 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.608: INFO: Pod "nginx-deployment-5c98f8fb5-jncwf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jncwf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-5c98f8fb5-jncwf,UID:14d6dad4-af01-11ea-99e8-0242ac110002,ResourceVersion:16078755,Generation:0,CreationTimestamp:2020-06-15 12:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 132ac81c-af01-11ea-99e8-0242ac110002 0xc002195867 0xc002195868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021958e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002195900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.608: INFO: Pod "nginx-deployment-5c98f8fb5-n5szx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n5szx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-5c98f8fb5-n5szx,UID:14a320db-af01-11ea-99e8-0242ac110002,ResourceVersion:16078784,Generation:0,CreationTimestamp:2020-06-15 12:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 132ac81c-af01-11ea-99e8-0242ac110002 0xc0021959e7 0xc0021959e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002195a60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002195a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 
UTC 2020-06-15 12:09:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-15 12:09:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.608: INFO: Pod "nginx-deployment-5c98f8fb5-pst9f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pst9f,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-5c98f8fb5-pst9f,UID:1368d7ec-af01-11ea-99e8-0242ac110002,ResourceVersion:16078735,Generation:0,CreationTimestamp:2020-06-15 12:09:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 132ac81c-af01-11ea-99e8-0242ac110002 0xc002195b47 0xc002195b48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002195c50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002195c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-15 12:09:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} 
false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.608: INFO: Pod "nginx-deployment-5c98f8fb5-qtlns" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qtlns,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-5c98f8fb5-qtlns,UID:14de8d27-af01-11ea-99e8-0242ac110002,ResourceVersion:16078777,Generation:0,CreationTimestamp:2020-06-15 12:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 132ac81c-af01-11ea-99e8-0242ac110002 0xc002195d47 0xc002195d48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002195f00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002195f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:37 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.608: INFO: Pod "nginx-deployment-5c98f8fb5-qw76j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qw76j,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-5c98f8fb5-qw76j,UID:132c0252-af01-11ea-99e8-0242ac110002,ResourceVersion:16078700,Generation:0,CreationTimestamp:2020-06-15 12:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 132ac81c-af01-11ea-99e8-0242ac110002 0xc002195f97 0xc002195f98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a8010} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a8030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-15 12:09:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.608: INFO: Pod "nginx-deployment-5c98f8fb5-rfhvp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rfhvp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-5c98f8fb5-rfhvp,UID:14ff32b3-af01-11ea-99e8-0242ac110002,ResourceVersion:16078786,Generation:0,CreationTimestamp:2020-06-15 12:09:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 132ac81c-af01-11ea-99e8-0242ac110002 0xc0021a80f7 0xc0021a80f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a8170} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a8190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:37 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.608: INFO: Pod "nginx-deployment-5c98f8fb5-wmszv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wmszv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-5c98f8fb5-wmszv,UID:132e1eb5-af01-11ea-99e8-0242ac110002,ResourceVersion:16078716,Generation:0,CreationTimestamp:2020-06-15 12:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 132ac81c-af01-11ea-99e8-0242ac110002 0xc0021a8207 0xc0021a8208}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a8280} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0021a82a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:34 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-15 12:09:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.608: INFO: Pod "nginx-deployment-85ddf47c5d-2hb9r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2hb9r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-2hb9r,UID:14de9056-af01-11ea-99e8-0242ac110002,ResourceVersion:16078779,Generation:0,CreationTimestamp:2020-06-15 12:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a8367 0xc0021a8368}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a83e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a8400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:37 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.608: INFO: Pod 
"nginx-deployment-85ddf47c5d-5l7k4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5l7k4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-5l7k4,UID:14de84f3-af01-11ea-99e8-0242ac110002,ResourceVersion:16078778,Generation:0,CreationTimestamp:2020-06-15 12:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a8477 0xc0021a8478}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a84f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a8510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:37 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.609: INFO: Pod "nginx-deployment-85ddf47c5d-7b7pb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7b7pb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-7b7pb,UID:f3b85bd3-af00-11ea-99e8-0242ac110002,ResourceVersion:16078666,Generation:0,CreationTimestamp:2020-06-15 12:08:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a8587 0xc0021a8588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a8600} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a8620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:08:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:08:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.72,StartTime:2020-06-15 12:08:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-15 12:09:32 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b1cbbfc577fa131cabd09bd35b296039237124beb1040780dcc60e7c9574bb2c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.609: INFO: Pod "nginx-deployment-85ddf47c5d-9dzq6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9dzq6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-9dzq6,UID:14d6e7ac-af01-11ea-99e8-0242ac110002,ResourceVersion:16078763,Generation:0,CreationTimestamp:2020-06-15 12:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a86e7 0xc0021a86e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a8760} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a8780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.609: INFO: Pod "nginx-deployment-85ddf47c5d-fp6gd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fp6gd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-fp6gd,UID:14d6ea6a-af01-11ea-99e8-0242ac110002,ResourceVersion:16078760,Generation:0,CreationTimestamp:2020-06-15 12:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a87f7 0xc0021a87f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a8870} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0021a8890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.609: INFO: Pod "nginx-deployment-85ddf47c5d-ft47h" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ft47h,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-ft47h,UID:14d6e869-af01-11ea-99e8-0242ac110002,ResourceVersion:16078757,Generation:0,CreationTimestamp:2020-06-15 12:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a8907 0xc0021a8908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a8980} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a89a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.609: INFO: Pod "nginx-deployment-85ddf47c5d-fv7dh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fv7dh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-fv7dh,UID:f3ad8dc4-af00-11ea-99e8-0242ac110002,ResourceVersion:16078641,Generation:0,CreationTimestamp:2020-06-15 12:08:41 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a8a17 0xc0021a8a18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a8aa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a8ac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:08:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:08:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.68,StartTime:2020-06-15 12:08:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-15 12:09:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://03e946307241cbe8fc4491ceb6a0b6ce6d9d109310f11257cff029073a3a117d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.609: INFO: Pod "nginx-deployment-85ddf47c5d-jgt5k" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jgt5k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-jgt5k,UID:f3b87229-af00-11ea-99e8-0242ac110002,ResourceVersion:16078655,Generation:0,CreationTimestamp:2020-06-15 12:08:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a8b97 0xc0021a8b98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a8d10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a8d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:08:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:08:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.31,StartTime:2020-06-15 12:08:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-15 12:09:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://628ff3388c17875d11f20f1b9129687d7a053d025bcc4469f3a2c09a3706d5ff}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.609: INFO: Pod "nginx-deployment-85ddf47c5d-kcmkt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kcmkt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-kcmkt,UID:14de6d76-af01-11ea-99e8-0242ac110002,ResourceVersion:16078775,Generation:0,CreationTimestamp:2020-06-15 12:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a8df7 0xc0021a8df8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a8e70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a8e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:37 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.610: INFO: Pod "nginx-deployment-85ddf47c5d-n6zpn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n6zpn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-n6zpn,UID:f3b467e9-af00-11ea-99e8-0242ac110002,ResourceVersion:16078672,Generation:0,CreationTimestamp:2020-06-15 12:08:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a8f07 0xc0021a8f08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a8f80} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0021a8fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:08:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:08:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.71,StartTime:2020-06-15 12:08:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-15 12:09:32 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c189fa7001687a59a640b2af0d6ea064d6c1f3974a82cd02ad46293d7e4c76c4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.610: INFO: Pod "nginx-deployment-85ddf47c5d-ntb4t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ntb4t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-ntb4t,UID:14de8b75-af01-11ea-99e8-0242ac110002,ResourceVersion:16078783,Generation:0,CreationTimestamp:2020-06-15 12:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a9077 0xc0021a9078}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a90f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a9110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:37 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.610: INFO: Pod "nginx-deployment-85ddf47c5d-phgdb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-phgdb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-phgdb,UID:14a34b33-af01-11ea-99e8-0242ac110002,ResourceVersion:16078751,Generation:0,CreationTimestamp:2020-06-15 12:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a9197 0xc0021a9198}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a9210} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a9230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.610: INFO: Pod "nginx-deployment-85ddf47c5d-pwtwv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pwtwv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-pwtwv,UID:14a3469d-af01-11ea-99e8-0242ac110002,ResourceVersion:16078796,Generation:0,CreationTimestamp:2020-06-15 12:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a92a7 
0xc0021a92a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a9320} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a9340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-06-15 12:09:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.610: INFO: Pod "nginx-deployment-85ddf47c5d-qrx4s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qrx4s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-qrx4s,UID:14de947b-af01-11ea-99e8-0242ac110002,ResourceVersion:16078781,Generation:0,CreationTimestamp:2020-06-15 12:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a93f7 0xc0021a93f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a9470} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a9490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:37 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.610: INFO: Pod "nginx-deployment-85ddf47c5d-rnbxg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rnbxg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-rnbxg,UID:f3b13d94-af00-11ea-99e8-0242ac110002,ResourceVersion:16078649,Generation:0,CreationTimestamp:2020-06-15 12:08:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a9527 0xc0021a9528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a95d0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0021a95f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:08:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:08:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.27,StartTime:2020-06-15 12:08:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-15 12:09:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ad2af0af1a27f8ddf762132ed5747f7d502f4cfec38c3f78ff79733e6f1fcc5c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.610: INFO: Pod "nginx-deployment-85ddf47c5d-swbzp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-swbzp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-swbzp,UID:f3b45ff4-af00-11ea-99e8-0242ac110002,ResourceVersion:16078669,Generation:0,CreationTimestamp:2020-06-15 12:08:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a96b7 0xc0021a96b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a9730} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a9750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:08:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:33 +0000 UTC } {ContainersReady 
True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:08:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.29,StartTime:2020-06-15 12:08:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-15 12:09:32 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://333fb0a4dd2a94e0118fbed3d5ecb9dd8118c7c5e4cffd1ed51b0360fe6739b6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.610: INFO: Pod "nginx-deployment-85ddf47c5d-tmv57" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tmv57,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-tmv57,UID:f3b4632e-af00-11ea-99e8-0242ac110002,ResourceVersion:16078642,Generation:0,CreationTimestamp:2020-06-15 12:08:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a9827 0xc0021a9828}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a98a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a98c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:08:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:08:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.28,StartTime:2020-06-15 12:08:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-15 12:09:31 +0000 UTC,} nil} {nil nil nil} true 0 
docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ced0c65c6888503595389d1eb2eb3578a528910e9dc5f024a8a50c5c5d507ab4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.611: INFO: Pod "nginx-deployment-85ddf47c5d-trjj2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-trjj2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-trjj2,UID:f3b16c82-af00-11ea-99e8-0242ac110002,ResourceVersion:16078650,Generation:0,CreationTimestamp:2020-06-15 12:08:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a9987 0xc0021a9988}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a9a00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a9a20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:08:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:08:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.69,StartTime:2020-06-15 12:08:41 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-15 12:09:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://71698bbde5b28849eb190c07c13fd6a248c20d1e22c3bd230994e15fdbc95849}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.611: INFO: Pod "nginx-deployment-85ddf47c5d-w2jws" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-w2jws,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-w2jws,UID:14d6e8e5-af01-11ea-99e8-0242ac110002,ResourceVersion:16078761,Generation:0,CreationTimestamp:2020-06-15 12:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a9b37 0xc0021a9b38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a9cc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a9d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 15 12:09:37.611: INFO: Pod "nginx-deployment-85ddf47c5d-xfc2n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xfc2n,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-xk7fg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-xk7fg/pods/nginx-deployment-85ddf47c5d-xfc2n,UID:14731da7-af01-11ea-99e8-0242ac110002,ResourceVersion:16078788,Generation:0,CreationTimestamp:2020-06-15 12:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f3a74bba-af00-11ea-99e8-0242ac110002 0xc0021a9e07 0xc0021a9e08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fnrl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnrl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-fnrl5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a9e80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a9ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:09:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-06-15 12:09:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:09:37.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-xk7fg" for this suite. 
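The dumps above are from the proportional scaling test: the Deployment was updated to a new nginx:404 image and then scaled up mid-rollout, so the listing shows a mix of available nginx:1.14-alpine pods and freshly created pods that are still Pending. A minimal Go sketch of a Deployment arranged for such a rollout, using the k8s.io/api types; the replica count and rolling-update budgets below are assumptions, not values taken from this run:

    // Sketch only: a Deployment shaped like the one in this test, built with the
    // real k8s.io/api types. Replica count and rolling-update budgets are
    // illustrative assumptions, not values read from the log.
    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        maxSurge := intstr.FromInt(3)       // assumed budget
        maxUnavailable := intstr.FromInt(2) // assumed budget

        deploy := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: int32Ptr(10), // scaled up while the rollout is unfinished
                Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "nginx"}},
                Strategy: appsv1.DeploymentStrategy{
                    Type: appsv1.RollingUpdateDeploymentStrategyType,
                    RollingUpdate: &appsv1.RollingUpdateDeployment{
                        MaxSurge:       &maxSurge,
                        MaxUnavailable: &maxUnavailable,
                    },
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "nginx"}},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "nginx",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        fmt.Printf("%s: %d replicas, maxSurge=%s, maxUnavailable=%s\n",
            deploy.Name, *deploy.Spec.Replicas, maxSurge.String(), maxUnavailable.String())
    }

Scaling such a Deployment while the new ReplicaSet is still rolling out is what triggers the proportional split the test name refers to: the extra replicas are divided between the old and new ReplicaSets according to their current sizes instead of all landing on one of them.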
Jun 15 12:10:04.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 15 12:10:04.098: INFO: namespace: e2e-tests-deployment-xk7fg, resource: bindings, ignored listing per whitelist
Jun 15 12:10:04.131: INFO: namespace e2e-tests-deployment-xk7fg deletion completed in 26.433686461s
• [SLOW TEST:83.205 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 15 12:10:04.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jun 15 12:10:05.256: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25cb2a8e-af01-11ea-99db-0242ac11001b" in namespace "e2e-tests-downward-api-wqw8p" to be "success or failure"
Jun 15 12:10:05.541: INFO: Pod "downwardapi-volume-25cb2a8e-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 284.464593ms
Jun 15 12:10:07.996: INFO: Pod "downwardapi-volume-25cb2a8e-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.739736184s
Jun 15 12:10:10.035: INFO: Pod "downwardapi-volume-25cb2a8e-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.778898003s
Jun 15 12:10:12.043: INFO: Pod "downwardapi-volume-25cb2a8e-af01-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 6.787029899s
Jun 15 12:10:14.048: INFO: Pod "downwardapi-volume-25cb2a8e-af01-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 8.791181691s
Jun 15 12:10:16.058: INFO: Pod "downwardapi-volume-25cb2a8e-af01-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 10.801165935s
Jun 15 12:10:18.060: INFO: Pod "downwardapi-volume-25cb2a8e-af01-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 12.804085565s
Jun 15 12:10:20.064: INFO: Pod "downwardapi-volume-25cb2a8e-af01-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 14.807728892s
Jun 15 12:10:22.071: INFO: Pod "downwardapi-volume-25cb2a8e-af01-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.814274669s
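The downwardapi-volume pod above has just reached Succeeded. A minimal Go sketch of the kind of spec this test exercises, using the k8s.io/api types: a downward API volume file backed by a resourceFieldRef on limits.cpu, mounted by a container that deliberately sets no CPU limit so the file falls back to the node's allocatable CPU. Image, command and paths are assumptions, not copied from the suite:

    // Sketch only (assumed image, command and paths): a pod whose downward API
    // volume exposes the container's CPU limit as a file. Because the container
    // declares no CPU limit, the kubelet falls back to the node's allocatable
    // CPU, which is the behaviour named in the test above.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox", // illustrative; the suite uses its own test image
                    Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                    // No resources.limits.cpu on purpose: the downward API file
                    // then reports node allocatable CPU instead.
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "cpu_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.cpu",
                                },
                            }},
                        },
                    },
                }},
            },
        }
        fmt.Println("pod spec built:", pod.Name)
    }

If the container did set resources.limits.cpu, the same file would report that limit instead.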
STEP: Saw pod success
Jun 15 12:10:22.071: INFO: Pod "downwardapi-volume-25cb2a8e-af01-11ea-99db-0242ac11001b" satisfied condition "success or failure"
Jun 15 12:10:22.072: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-25cb2a8e-af01-11ea-99db-0242ac11001b container client-container:
STEP: delete the pod
Jun 15 12:10:22.456: INFO: Waiting for pod downwardapi-volume-25cb2a8e-af01-11ea-99db-0242ac11001b to disappear
Jun 15 12:10:22.598: INFO: Pod downwardapi-volume-25cb2a8e-af01-11ea-99db-0242ac11001b no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jun 15 12:10:22.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wqw8p" for this suite.
Jun 15 12:10:28.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 15 12:10:28.833: INFO: namespace: e2e-tests-downward-api-wqw8p, resource: bindings, ignored listing per whitelist
Jun 15 12:10:28.863: INFO: namespace e2e-tests-downward-api-wqw8p deletion completed in 6.262808589s
• [SLOW TEST:24.732 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jun 15 12:10:28.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-34488678-af01-11ea-99db-0242ac11001b
STEP: Creating a pod to test consume secrets
Jun 15 12:10:29.558: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3448dfed-af01-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-hr9wt" to be "success or failure"
Jun 15 12:10:29.613: INFO: Pod "pod-projected-secrets-3448dfed-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 54.970863ms
Jun 15 12:10:31.616: INFO: Pod "pod-projected-secrets-3448dfed-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057918503s
Jun 15 12:10:33.619: INFO: Pod "pod-projected-secrets-3448dfed-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061270283s
Jun 15 12:10:35.785: INFO: Pod "pod-projected-secrets-3448dfed-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.22651667s
Jun 15 12:10:37.788: INFO: Pod "pod-projected-secrets-3448dfed-af01-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 8.229733883s
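The pod-projected-secrets pod above is the "with mappings" case: the secret is consumed through a projected volume whose items remap a secret key to a chosen file path. A minimal Go sketch of that volume shape, using the k8s.io/api types; the secret name is shortened here and the key, path and mode are assumptions:

    // Sketch only: the projected-volume "mapping" shape, remapping one Secret key
    // to a chosen file path. The secret name, key, path and mode are assumptions;
    // in the run above the secret name also carries a generated suffix.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0400) // assumed file mode

        vol := corev1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    DefaultMode: &mode,
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "projected-secret-test-map",
                            },
                            // The mapping: key in the Secret -> path inside the mount.
                            Items: []corev1.KeyToPath{{
                                Key:  "data-1",
                                Path: "new-path-data-1",
                            }},
                        },
                    }},
                },
            },
        }
        s := vol.Projected.Sources[0].Secret
        fmt.Printf("secret %q: key %q mapped to %q\n", s.Name, s.Items[0].Key, s.Items[0].Path)
    }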
Elapsed: 8.229733883s Jun 15 12:10:39.976: INFO: Pod "pod-projected-secrets-3448dfed-af01-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.418414977s STEP: Saw pod success Jun 15 12:10:39.977: INFO: Pod "pod-projected-secrets-3448dfed-af01-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:10:40.132: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-3448dfed-af01-11ea-99db-0242ac11001b container projected-secret-volume-test: STEP: delete the pod Jun 15 12:10:40.219: INFO: Waiting for pod pod-projected-secrets-3448dfed-af01-11ea-99db-0242ac11001b to disappear Jun 15 12:10:40.223: INFO: Pod pod-projected-secrets-3448dfed-af01-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:10:40.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hr9wt" for this suite. Jun 15 12:10:46.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:10:46.303: INFO: namespace: e2e-tests-projected-hr9wt, resource: bindings, ignored listing per whitelist Jun 15 12:10:46.347: INFO: namespace e2e-tests-projected-hr9wt deletion completed in 6.121426216s • [SLOW TEST:17.484 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:10:46.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Jun 15 12:10:47.903: INFO: Waiting up to 5m0s for pod "pod-3edfbd3a-af01-11ea-99db-0242ac11001b" in namespace "e2e-tests-emptydir-mhgdh" to be "success or failure" Jun 15 12:10:47.983: INFO: Pod "pod-3edfbd3a-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 79.931195ms Jun 15 12:10:50.096: INFO: Pod "pod-3edfbd3a-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193086893s Jun 15 12:10:52.195: INFO: Pod "pod-3edfbd3a-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291696818s Jun 15 12:10:54.197: INFO: Pod "pod-3edfbd3a-af01-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.294528868s STEP: Saw pod success Jun 15 12:10:54.198: INFO: Pod "pod-3edfbd3a-af01-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:10:54.199: INFO: Trying to get logs from node hunter-worker2 pod pod-3edfbd3a-af01-11ea-99db-0242ac11001b container test-container: STEP: delete the pod Jun 15 12:10:54.578: INFO: Waiting for pod pod-3edfbd3a-af01-11ea-99db-0242ac11001b to disappear Jun 15 12:10:54.607: INFO: Pod pod-3edfbd3a-af01-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:10:54.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-mhgdh" for this suite. Jun 15 12:11:30.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:11:31.526: INFO: namespace: e2e-tests-emptydir-mhgdh, resource: bindings, ignored listing per whitelist Jun 15 12:11:31.533: INFO: namespace e2e-tests-emptydir-mhgdh deletion completed in 36.747485107s • [SLOW TEST:45.186 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:11:31.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 15 12:11:31.742: INFO: Waiting up to 5m0s for pod "downward-api-595dbce2-af01-11ea-99db-0242ac11001b" in namespace "e2e-tests-downward-api-sqt86" to be "success or failure" Jun 15 12:11:31.756: INFO: Pod "downward-api-595dbce2-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.389696ms Jun 15 12:11:33.760: INFO: Pod "downward-api-595dbce2-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018358321s Jun 15 12:11:35.763: INFO: Pod "downward-api-595dbce2-af01-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.021637087s Jun 15 12:11:37.766: INFO: Pod "downward-api-595dbce2-af01-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.024277673s STEP: Saw pod success Jun 15 12:11:37.766: INFO: Pod "downward-api-595dbce2-af01-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:11:37.768: INFO: Trying to get logs from node hunter-worker pod downward-api-595dbce2-af01-11ea-99db-0242ac11001b container dapi-container: STEP: delete the pod Jun 15 12:11:37.782: INFO: Waiting for pod downward-api-595dbce2-af01-11ea-99db-0242ac11001b to disappear Jun 15 12:11:37.786: INFO: Pod downward-api-595dbce2-af01-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:11:37.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-sqt86" for this suite. Jun 15 12:11:43.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:11:43.847: INFO: namespace: e2e-tests-downward-api-sqt86, resource: bindings, ignored listing per whitelist Jun 15 12:11:43.908: INFO: namespace e2e-tests-downward-api-sqt86 deletion completed in 6.119276744s • [SLOW TEST:12.374 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:11:43.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-60b4f289-af01-11ea-99db-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 15 12:11:44.118: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-60b70811-af01-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-zcx7p" to be "success or failure" Jun 15 12:11:44.135: INFO: Pod "pod-projected-configmaps-60b70811-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.278426ms Jun 15 12:11:46.139: INFO: Pod "pod-projected-configmaps-60b70811-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02066735s Jun 15 12:11:48.288: INFO: Pod "pod-projected-configmaps-60b70811-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170053935s Jun 15 12:11:50.292: INFO: Pod "pod-projected-configmaps-60b70811-af01-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.174061456s STEP: Saw pod success Jun 15 12:11:50.292: INFO: Pod "pod-projected-configmaps-60b70811-af01-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:11:50.295: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-60b70811-af01-11ea-99db-0242ac11001b container projected-configmap-volume-test: STEP: delete the pod Jun 15 12:11:50.329: INFO: Waiting for pod pod-projected-configmaps-60b70811-af01-11ea-99db-0242ac11001b to disappear Jun 15 12:11:50.356: INFO: Pod pod-projected-configmaps-60b70811-af01-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:11:50.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zcx7p" for this suite. Jun 15 12:11:58.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:11:58.408: INFO: namespace: e2e-tests-projected-zcx7p, resource: bindings, ignored listing per whitelist Jun 15 12:11:58.444: INFO: namespace e2e-tests-projected-zcx7p deletion completed in 8.085125291s • [SLOW TEST:14.537 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:11:58.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-69564be8-af01-11ea-99db-0242ac11001b STEP: Creating secret with name s-test-opt-upd-69564c5f-af01-11ea-99db-0242ac11001b STEP: Creating the pod STEP: Deleting secret s-test-opt-del-69564be8-af01-11ea-99db-0242ac11001b STEP: Updating secret s-test-opt-upd-69564c5f-af01-11ea-99db-0242ac11001b STEP: Creating secret with name s-test-opt-create-69564c8e-af01-11ea-99db-0242ac11001b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:12:10.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-g5kwt" for this suite. 
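Roughly what the "optional updates" case above exercises can be written by hand as a projected volume whose secret sources are marked optional; the sketch below uses hypothetical names and assumes kubectl against a reachable cluster, it is not taken from the recorded run.

kubectl create secret generic s-opt-upd --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo
spec:
  containers:
  - name: creds-volume-test
    image: busybox
    command: ["sh", "-c", "while true; do ls -R /etc/creds; sleep 5; done"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    projected:
      sources:
      - secret:
          name: s-opt-del        # optional: the pod stays running even though this secret does not exist
          optional: true
      - secret:
          name: s-opt-upd        # edits to this secret are eventually reflected in the mounted files
          optional: true
EOF
# Deleting s-opt-del, updating s-opt-upd, or later creating a missing optional
# secret should eventually show up under /etc/creds inside the container.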
Jun 15 12:12:32.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:12:32.749: INFO: namespace: e2e-tests-projected-g5kwt, resource: bindings, ignored listing per whitelist Jun 15 12:12:32.788: INFO: namespace e2e-tests-projected-g5kwt deletion completed in 22.072573593s • [SLOW TEST:34.343 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:12:32.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-7dcea197-af01-11ea-99db-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 15 12:12:32.900: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-4gk9k" to be "success or failure" Jun 15 12:12:32.914: INFO: Pod "pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.382181ms Jun 15 12:12:38.163: INFO: Pod "pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.263135476s Jun 15 12:12:40.167: INFO: Pod "pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.267413898s Jun 15 12:12:42.265: INFO: Pod "pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.36526187s Jun 15 12:12:45.345: INFO: Pod "pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.445088332s Jun 15 12:12:47.675: INFO: Pod "pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.77501261s Jun 15 12:12:49.679: INFO: Pod "pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.77871166s Jun 15 12:12:51.683: INFO: Pod "pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.782715275s Jun 15 12:12:53.686: INFO: Pod "pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.786365165s Jun 15 12:12:56.087: INFO: Pod "pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.186836553s Jun 15 12:13:00.938: INFO: Pod "pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 28.038190915s Jun 15 12:13:02.942: INFO: Pod "pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.041731525s Jun 15 12:13:04.945: INFO: Pod "pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.045651193s Jun 15 12:13:06.948: INFO: Pod "pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 34.048305007s Jun 15 12:13:08.952: INFO: Pod "pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 36.052087941s Jun 15 12:13:10.956: INFO: Pod "pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.055860513s STEP: Saw pod success Jun 15 12:13:10.956: INFO: Pod "pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:13:10.959: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b container projected-configmap-volume-test: STEP: delete the pod Jun 15 12:13:11.011: INFO: Waiting for pod pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b to disappear Jun 15 12:13:11.043: INFO: Pod pod-projected-configmaps-7dcf3b8e-af01-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:13:11.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4gk9k" for this suite. 
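A hand-rolled equivalent of the "multiple volumes in the same pod" case is one ConfigMap consumed through two projected volumes at different mount paths; all names below are hypothetical and the sketch assumes kubectl against a reachable cluster.

kubectl create configmap shared-cm --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-multi-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume-1/data-1 /etc/projected-configmap-volume-2/data-1"]
    volumeMounts:
    - name: cm-volume-1
      mountPath: /etc/projected-configmap-volume-1
    - name: cm-volume-2
      mountPath: /etc/projected-configmap-volume-2
  volumes:
  - name: cm-volume-1
    projected:
      sources:
      - configMap:
          name: shared-cm
  - name: cm-volume-2
    projected:
      sources:
      - configMap:
          name: shared-cm
EOF
kubectl logs projected-configmap-multi-demo   # prints the same value from both mounts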
Jun 15 12:13:19.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:13:19.068: INFO: namespace: e2e-tests-projected-4gk9k, resource: bindings, ignored listing per whitelist Jun 15 12:13:19.286: INFO: namespace e2e-tests-projected-4gk9k deletion completed in 8.240313008s • [SLOW TEST:46.498 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:13:19.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 15 12:13:19.438: INFO: Waiting up to 5m0s for pod "pod-998ec9a6-af01-11ea-99db-0242ac11001b" in namespace "e2e-tests-emptydir-9m2h2" to be "success or failure" Jun 15 12:13:19.491: INFO: Pod "pod-998ec9a6-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 53.520579ms Jun 15 12:13:21.494: INFO: Pod "pod-998ec9a6-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056479406s Jun 15 12:13:23.580: INFO: Pod "pod-998ec9a6-af01-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.142442341s STEP: Saw pod success Jun 15 12:13:23.580: INFO: Pod "pod-998ec9a6-af01-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:13:23.756: INFO: Trying to get logs from node hunter-worker pod pod-998ec9a6-af01-11ea-99db-0242ac11001b container test-container: STEP: delete the pod Jun 15 12:13:23.845: INFO: Waiting for pod pod-998ec9a6-af01-11ea-99db-0242ac11001b to disappear Jun 15 12:13:23.930: INFO: Pod pod-998ec9a6-af01-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:13:23.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9m2h2" for this suite. 
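The "(non-root,0777,default)" case boils down to a non-root container writing a world-writable file into an emptyDir on the default (node-disk) medium; a minimal sketch, with a hypothetical UID and names, might look like this.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # non-root UID (hypothetical)
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/test/file && chmod 0777 /mnt/test/file && ls -l /mnt/test/file"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir: {}               # default medium (node disk); medium: Memory would use tmpfs instead
EOF
kubectl logs emptydir-nonroot-demo   # the file mode should show as -rwxrwxrwx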
Jun 15 12:13:32.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:13:32.038: INFO: namespace: e2e-tests-emptydir-9m2h2, resource: bindings, ignored listing per whitelist Jun 15 12:13:32.069: INFO: namespace e2e-tests-emptydir-9m2h2 deletion completed in 8.135003355s • [SLOW TEST:12.782 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:13:32.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Jun 15 12:13:32.156: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix628826981/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:13:32.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6pl62" for this suite. 
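The proxy test above starts kubectl proxy on a Unix domain socket and then fetches /api/ through it; an equivalent manual check could look like the sketch below (the socket path is hypothetical, and curl built with --unix-socket support is assumed).

SOCK=/tmp/kubectl-proxy-demo.sock
kubectl proxy --unix-socket="$SOCK" &        # serve the API over a Unix socket instead of a TCP port
PROXY_PID=$!
sleep 1                                      # give the proxy a moment to bind the socket
curl --silent --unix-socket "$SOCK" http://localhost/api/   # the host part is ignored; the socket carries the request
kill "$PROXY_PID"
rm -f "$SOCK"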
Jun 15 12:13:38.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:13:38.289: INFO: namespace: e2e-tests-kubectl-6pl62, resource: bindings, ignored listing per whitelist Jun 15 12:13:38.344: INFO: namespace e2e-tests-kubectl-6pl62 deletion completed in 6.101229073s • [SLOW TEST:6.275 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:13:38.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 15 12:13:38.413: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a4de47d1-af01-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-hz2l8" to be "success or failure" Jun 15 12:13:38.451: INFO: Pod "downwardapi-volume-a4de47d1-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 37.84392ms Jun 15 12:13:40.590: INFO: Pod "downwardapi-volume-a4de47d1-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177018954s Jun 15 12:13:42.607: INFO: Pod "downwardapi-volume-a4de47d1-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193375166s Jun 15 12:13:44.655: INFO: Pod "downwardapi-volume-a4de47d1-af01-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.241365508s STEP: Saw pod success Jun 15 12:13:44.655: INFO: Pod "downwardapi-volume-a4de47d1-af01-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:13:44.656: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a4de47d1-af01-11ea-99db-0242ac11001b container client-container: STEP: delete the pod Jun 15 12:13:44.824: INFO: Waiting for pod downwardapi-volume-a4de47d1-af01-11ea-99db-0242ac11001b to disappear Jun 15 12:13:45.063: INFO: Pod downwardapi-volume-a4de47d1-af01-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:13:45.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hz2l8" for this suite. 
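The "podname only" case corresponds to a projected downwardAPI volume that exposes nothing but metadata.name; a minimal sketch with hypothetical names follows.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
kubectl logs downwardapi-podname-demo   # prints the pod's own name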
Jun 15 12:13:51.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:13:51.210: INFO: namespace: e2e-tests-projected-hz2l8, resource: bindings, ignored listing per whitelist Jun 15 12:13:51.215: INFO: namespace e2e-tests-projected-hz2l8 deletion completed in 6.148834524s • [SLOW TEST:12.871 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:13:51.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-ac952b57-af01-11ea-99db-0242ac11001b STEP: Creating a pod to test consume secrets Jun 15 12:13:51.438: INFO: Waiting up to 5m0s for pod "pod-secrets-aca17eea-af01-11ea-99db-0242ac11001b" in namespace "e2e-tests-secrets-qpwc5" to be "success or failure" Jun 15 12:13:51.443: INFO: Pod "pod-secrets-aca17eea-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.448352ms Jun 15 12:13:53.446: INFO: Pod "pod-secrets-aca17eea-af01-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007493541s Jun 15 12:13:55.449: INFO: Pod "pod-secrets-aca17eea-af01-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.011000213s Jun 15 12:13:57.452: INFO: Pod "pod-secrets-aca17eea-af01-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013849283s STEP: Saw pod success Jun 15 12:13:57.452: INFO: Pod "pod-secrets-aca17eea-af01-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:13:57.455: INFO: Trying to get logs from node hunter-worker pod pod-secrets-aca17eea-af01-11ea-99db-0242ac11001b container secret-volume-test: STEP: delete the pod Jun 15 12:13:57.510: INFO: Waiting for pod pod-secrets-aca17eea-af01-11ea-99db-0242ac11001b to disappear Jun 15 12:13:57.589: INFO: Pod pod-secrets-aca17eea-af01-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:13:57.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-qpwc5" for this suite. 
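The case above creates a Secret with the same name in a second namespace to confirm that a secret volume always resolves within the pod's own namespace; a hand-written version, with hypothetical namespace and secret names, could look like this.

kubectl create namespace demo-a
kubectl create namespace demo-b
kubectl -n demo-a create secret generic shared-name --from-literal=data-1=from-demo-a
kubectl -n demo-b create secret generic shared-name --from-literal=data-1=from-demo-b
kubectl -n demo-a create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-namespace-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: shared-name    # resolved in the pod's namespace (demo-a), never in demo-b
EOF
kubectl -n demo-a logs secret-namespace-demo   # expected output: from-demo-a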
Jun 15 12:14:03.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:14:03.700: INFO: namespace: e2e-tests-secrets-qpwc5, resource: bindings, ignored listing per whitelist Jun 15 12:14:03.711: INFO: namespace e2e-tests-secrets-qpwc5 deletion completed in 6.119095186s STEP: Destroying namespace "e2e-tests-secret-namespace-d6v6c" for this suite. Jun 15 12:14:09.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:14:09.764: INFO: namespace: e2e-tests-secret-namespace-d6v6c, resource: bindings, ignored listing per whitelist Jun 15 12:14:09.804: INFO: namespace e2e-tests-secret-namespace-d6v6c deletion completed in 6.092191253s • [SLOW TEST:18.588 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:14:09.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0615 12:14:20.655267 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 15 12:14:20.655: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:14:20.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-zhlt2" for this suite. Jun 15 12:14:27.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:14:27.662: INFO: namespace: e2e-tests-gc-zhlt2, resource: bindings, ignored listing per whitelist Jun 15 12:14:27.702: INFO: namespace e2e-tests-gc-zhlt2 deletion completed in 6.626394831s • [SLOW TEST:17.899 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:14:27.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-stg48 Jun 15 12:14:36.653: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-stg48 STEP: checking the pod's current state and verifying that restartCount is present Jun 15 12:14:37.010: INFO: Initial restart count of pod liveness-exec is 0 Jun 15 12:15:24.324: INFO: Restart count of pod e2e-tests-container-probe-stg48/liveness-exec is now 1 (47.31369913s elapsed) STEP: deleting the pod 
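The restart observed above comes from an exec liveness probe that fails once the probed file disappears; a minimal reproduction (file path, image and timings hypothetical) is sketched below.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds only while the file exists
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
# After roughly 30s the probe starts failing and the kubelet restarts the container:
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'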
[AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:15:24.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-stg48" for this suite. Jun 15 12:15:30.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:15:30.568: INFO: namespace: e2e-tests-container-probe-stg48, resource: bindings, ignored listing per whitelist Jun 15 12:15:30.613: INFO: namespace e2e-tests-container-probe-stg48 deletion completed in 6.116879333s • [SLOW TEST:62.911 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:15:30.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 15 12:15:30.797: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 4.395498ms) Jun 15 12:15:30.799: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.76666ms) Jun 15 12:15:30.802: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.483144ms) Jun 15 12:15:30.804: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.065376ms) Jun 15 12:15:30.807: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.928877ms) Jun 15 12:15:30.809: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.267609ms) Jun 15 12:15:30.812: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.873977ms) Jun 15 12:15:30.815: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.391673ms) Jun 15 12:15:30.817: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.476696ms) Jun 15 12:15:30.819: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.423262ms) Jun 15 12:15:30.822: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.367558ms) Jun 15 12:15:30.825: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.500113ms) Jun 15 12:15:30.828: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.559598ms) Jun 15 12:15:30.831: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.896379ms) Jun 15 12:15:30.834: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.111608ms) Jun 15 12:15:30.837: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.05209ms) Jun 15 12:15:30.840: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.591264ms) Jun 15 12:15:30.843: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.73163ms) Jun 15 12:15:30.845: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.900708ms) Jun 15 12:15:30.848: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.697446ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:15:30.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-6wjm7" for this suite. Jun 15 12:15:37.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:15:37.079: INFO: namespace: e2e-tests-proxy-6wjm7, resource: bindings, ignored listing per whitelist Jun 15 12:15:37.121: INFO: namespace e2e-tests-proxy-6wjm7 deletion completed in 6.269615188s • [SLOW TEST:6.507 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:15:37.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-dxxvv STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 15 12:15:37.215: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 15 12:16:21.313: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.94:8080/dial?request=hostName&protocol=udp&host=10.244.2.50&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-dxxvv PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 15 12:16:21.313: INFO: >>> kubeConfig: /root/.kube/config I0615 12:16:21.339953 6 log.go:172] (0xc0022a2000) (0xc0012041e0) Create stream I0615 12:16:21.339977 6 log.go:172] (0xc0022a2000) (0xc0012041e0) Stream added, broadcasting: 1 I0615 12:16:21.341456 6 log.go:172] (0xc0022a2000) Reply frame received for 1 I0615 12:16:21.341507 6 log.go:172] (0xc0022a2000) (0xc002292280) Create stream I0615 12:16:21.341527 6 log.go:172] (0xc0022a2000) (0xc002292280) Stream added, broadcasting: 3 I0615 12:16:21.342783 6 log.go:172] (0xc0022a2000) Reply frame received for 3 I0615 12:16:21.342820 6 log.go:172] (0xc0022a2000) (0xc0020f8000) Create stream I0615 12:16:21.342833 6 log.go:172] (0xc0022a2000) (0xc0020f8000) Stream added, broadcasting: 5 I0615 12:16:21.343812 6 log.go:172] (0xc0022a2000) Reply frame received for 5 I0615 12:16:21.455186 6 log.go:172] (0xc0022a2000) Data frame received for 3 I0615 12:16:21.455218 6 log.go:172] (0xc002292280) (3) Data frame handling I0615 12:16:21.455242 6 log.go:172] (0xc002292280) (3) Data frame sent I0615 12:16:21.456276 6 
log.go:172] (0xc0022a2000) Data frame received for 5 I0615 12:16:21.456317 6 log.go:172] (0xc0020f8000) (5) Data frame handling I0615 12:16:21.456377 6 log.go:172] (0xc0022a2000) Data frame received for 3 I0615 12:16:21.456401 6 log.go:172] (0xc002292280) (3) Data frame handling I0615 12:16:21.460447 6 log.go:172] (0xc0022a2000) Data frame received for 1 I0615 12:16:21.460485 6 log.go:172] (0xc0012041e0) (1) Data frame handling I0615 12:16:21.460889 6 log.go:172] (0xc0012041e0) (1) Data frame sent I0615 12:16:21.461001 6 log.go:172] (0xc0022a2000) (0xc0012041e0) Stream removed, broadcasting: 1 I0615 12:16:21.461056 6 log.go:172] (0xc0022a2000) Go away received I0615 12:16:21.461393 6 log.go:172] (0xc0022a2000) (0xc0012041e0) Stream removed, broadcasting: 1 I0615 12:16:21.461478 6 log.go:172] (0xc0022a2000) (0xc002292280) Stream removed, broadcasting: 3 I0615 12:16:21.462192 6 log.go:172] (0xc0022a2000) (0xc0020f8000) Stream removed, broadcasting: 5 Jun 15 12:16:21.462: INFO: Waiting for endpoints: map[] Jun 15 12:16:21.468: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.94:8080/dial?request=hostName&protocol=udp&host=10.244.1.93&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-dxxvv PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 15 12:16:21.468: INFO: >>> kubeConfig: /root/.kube/config I0615 12:16:21.490656 6 log.go:172] (0xc000c99600) (0xc0020f8460) Create stream I0615 12:16:21.490676 6 log.go:172] (0xc000c99600) (0xc0020f8460) Stream added, broadcasting: 1 I0615 12:16:21.492705 6 log.go:172] (0xc000c99600) Reply frame received for 1 I0615 12:16:21.492730 6 log.go:172] (0xc000c99600) (0xc0020f85a0) Create stream I0615 12:16:21.492739 6 log.go:172] (0xc000c99600) (0xc0020f85a0) Stream added, broadcasting: 3 I0615 12:16:21.493761 6 log.go:172] (0xc000c99600) Reply frame received for 3 I0615 12:16:21.493796 6 log.go:172] (0xc000c99600) (0xc0020f8640) Create stream I0615 12:16:21.493812 6 log.go:172] (0xc000c99600) (0xc0020f8640) Stream added, broadcasting: 5 I0615 12:16:21.494483 6 log.go:172] (0xc000c99600) Reply frame received for 5 I0615 12:16:21.572306 6 log.go:172] (0xc000c99600) Data frame received for 3 I0615 12:16:21.572340 6 log.go:172] (0xc0020f85a0) (3) Data frame handling I0615 12:16:21.572366 6 log.go:172] (0xc0020f85a0) (3) Data frame sent I0615 12:16:21.573436 6 log.go:172] (0xc000c99600) Data frame received for 3 I0615 12:16:21.573450 6 log.go:172] (0xc0020f85a0) (3) Data frame handling I0615 12:16:21.573474 6 log.go:172] (0xc000c99600) Data frame received for 5 I0615 12:16:21.573487 6 log.go:172] (0xc0020f8640) (5) Data frame handling I0615 12:16:21.575074 6 log.go:172] (0xc000c99600) Data frame received for 1 I0615 12:16:21.575125 6 log.go:172] (0xc0020f8460) (1) Data frame handling I0615 12:16:21.575161 6 log.go:172] (0xc0020f8460) (1) Data frame sent I0615 12:16:21.575191 6 log.go:172] (0xc000c99600) (0xc0020f8460) Stream removed, broadcasting: 1 I0615 12:16:21.575282 6 log.go:172] (0xc000c99600) (0xc0020f8460) Stream removed, broadcasting: 1 I0615 12:16:21.575302 6 log.go:172] (0xc000c99600) (0xc0020f85a0) Stream removed, broadcasting: 3 I0615 12:16:21.575317 6 log.go:172] (0xc000c99600) (0xc0020f8640) Stream removed, broadcasting: 5 Jun 15 12:16:21.575: INFO: Waiting for endpoints: map[] I0615 12:16:21.575411 6 log.go:172] (0xc000c99600) Go away received [AfterEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:16:21.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-dxxvv" for this suite. Jun 15 12:16:46.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:16:47.526: INFO: namespace: e2e-tests-pod-network-test-dxxvv, resource: bindings, ignored listing per whitelist Jun 15 12:16:47.529: INFO: namespace e2e-tests-pod-network-test-dxxvv deletion completed in 25.950122637s • [SLOW TEST:70.408 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:16:47.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jun 15 12:16:48.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pwqvk' Jun 15 12:16:49.628: INFO: stderr: "" Jun 15 12:16:49.628: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 15 12:16:49.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pwqvk' Jun 15 12:16:50.197: INFO: stderr: "" Jun 15 12:16:50.197: INFO: stdout: "update-demo-nautilus-7wtnn update-demo-nautilus-cnwkh " Jun 15 12:16:50.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wtnn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pwqvk' Jun 15 12:16:51.010: INFO: stderr: "" Jun 15 12:16:51.010: INFO: stdout: "" Jun 15 12:16:51.010: INFO: update-demo-nautilus-7wtnn is created but not running Jun 15 12:16:56.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pwqvk' Jun 15 12:16:56.815: INFO: stderr: "" Jun 15 12:16:56.815: INFO: stdout: "update-demo-nautilus-7wtnn update-demo-nautilus-cnwkh " Jun 15 12:16:56.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wtnn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pwqvk' Jun 15 12:16:57.660: INFO: stderr: "" Jun 15 12:16:57.660: INFO: stdout: "" Jun 15 12:16:57.660: INFO: update-demo-nautilus-7wtnn is created but not running Jun 15 12:17:02.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pwqvk' Jun 15 12:17:02.767: INFO: stderr: "" Jun 15 12:17:02.767: INFO: stdout: "update-demo-nautilus-7wtnn update-demo-nautilus-cnwkh " Jun 15 12:17:02.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wtnn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pwqvk' Jun 15 12:17:02.856: INFO: stderr: "" Jun 15 12:17:02.856: INFO: stdout: "true" Jun 15 12:17:02.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wtnn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pwqvk' Jun 15 12:17:02.976: INFO: stderr: "" Jun 15 12:17:02.976: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 15 12:17:02.976: INFO: validating pod update-demo-nautilus-7wtnn Jun 15 12:17:02.988: INFO: got data: { "image": "nautilus.jpg" } Jun 15 12:17:02.988: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 15 12:17:02.988: INFO: update-demo-nautilus-7wtnn is verified up and running Jun 15 12:17:02.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cnwkh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pwqvk' Jun 15 12:17:03.073: INFO: stderr: "" Jun 15 12:17:03.073: INFO: stdout: "true" Jun 15 12:17:03.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cnwkh -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pwqvk' Jun 15 12:17:03.180: INFO: stderr: "" Jun 15 12:17:03.180: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 15 12:17:03.180: INFO: validating pod update-demo-nautilus-cnwkh Jun 15 12:17:03.190: INFO: got data: { "image": "nautilus.jpg" } Jun 15 12:17:03.190: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 15 12:17:03.190: INFO: update-demo-nautilus-cnwkh is verified up and running STEP: using delete to clean up resources Jun 15 12:17:03.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-pwqvk' Jun 15 12:17:03.315: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 15 12:17:03.315: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 15 12:17:03.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-pwqvk' Jun 15 12:17:03.536: INFO: stderr: "No resources found.\n" Jun 15 12:17:03.536: INFO: stdout: "" Jun 15 12:17:03.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-pwqvk -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 15 12:17:04.046: INFO: stderr: "" Jun 15 12:17:04.046: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:17:04.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-pwqvk" for this suite. 
Jun 15 12:17:10.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:17:10.124: INFO: namespace: e2e-tests-kubectl-pwqvk, resource: bindings, ignored listing per whitelist Jun 15 12:17:10.149: INFO: namespace e2e-tests-kubectl-pwqvk deletion completed in 6.099081731s • [SLOW TEST:22.620 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:17:10.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Jun 15 12:17:10.828: INFO: Waiting up to 5m0s for pod "client-containers-23731766-af02-11ea-99db-0242ac11001b" in namespace "e2e-tests-containers-dd5jw" to be "success or failure" Jun 15 12:17:10.841: INFO: Pod "client-containers-23731766-af02-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.428476ms Jun 15 12:17:12.844: INFO: Pod "client-containers-23731766-af02-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016023616s Jun 15 12:17:14.916: INFO: Pod "client-containers-23731766-af02-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08848515s STEP: Saw pod success Jun 15 12:17:14.916: INFO: Pod "client-containers-23731766-af02-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:17:14.920: INFO: Trying to get logs from node hunter-worker pod client-containers-23731766-af02-11ea-99db-0242ac11001b container test-container: STEP: delete the pod Jun 15 12:17:15.275: INFO: Waiting for pod client-containers-23731766-af02-11ea-99db-0242ac11001b to disappear Jun 15 12:17:15.278: INFO: Pod client-containers-23731766-af02-11ea-99db-0242ac11001b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:17:15.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-dd5jw" for this suite. 
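"Override the image's default arguments (docker cmd)" comes down to setting only args in the container spec, which replaces the image CMD while leaving any ENTRYPOINT alone; a small sketch with a hypothetical pod name follows.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-args-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Setting only "args" replaces the image CMD (busybox defines no ENTRYPOINT, so the
    # args run directly); adding "command" as well would also override an ENTRYPOINT.
    args: ["echo", "overridden", "arguments"]
EOF
kubectl logs override-args-demo   # expected: "overridden arguments"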
Jun 15 12:17:21.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:17:21.356: INFO: namespace: e2e-tests-containers-dd5jw, resource: bindings, ignored listing per whitelist Jun 15 12:17:21.548: INFO: namespace e2e-tests-containers-dd5jw deletion completed in 6.257410443s • [SLOW TEST:11.399 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:17:21.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-2ab1442f-af02-11ea-99db-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 15 12:17:22.981: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2ab1ed47-af02-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-49j5g" to be "success or failure" Jun 15 12:17:22.992: INFO: Pod "pod-projected-configmaps-2ab1ed47-af02-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.937863ms Jun 15 12:17:24.995: INFO: Pod "pod-projected-configmaps-2ab1ed47-af02-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014210487s Jun 15 12:17:26.999: INFO: Pod "pod-projected-configmaps-2ab1ed47-af02-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017771314s Jun 15 12:17:29.002: INFO: Pod "pod-projected-configmaps-2ab1ed47-af02-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020954546s STEP: Saw pod success Jun 15 12:17:29.002: INFO: Pod "pod-projected-configmaps-2ab1ed47-af02-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:17:29.004: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-2ab1ed47-af02-11ea-99db-0242ac11001b container projected-configmap-volume-test: STEP: delete the pod Jun 15 12:17:29.035: INFO: Waiting for pod pod-projected-configmaps-2ab1ed47-af02-11ea-99db-0242ac11001b to disappear Jun 15 12:17:29.052: INFO: Pod pod-projected-configmaps-2ab1ed47-af02-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:17:29.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-49j5g" for this suite. 
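The volume used here is a projected volume wrapping a configMap source. A minimal, self-contained sketch with hypothetical names and key:

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-example
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                       # placeholder image
    args: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:                           # projected volumes can combine configMaps, secrets, downwardAPI items and tokens
      sources:
      - configMap:
          name: projected-configmap-example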
Jun 15 12:17:35.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:17:35.088: INFO: namespace: e2e-tests-projected-49j5g, resource: bindings, ignored listing per whitelist Jun 15 12:17:35.128: INFO: namespace e2e-tests-projected-49j5g deletion completed in 6.072885836s • [SLOW TEST:13.579 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:17:35.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-31ffe322-af02-11ea-99db-0242ac11001b STEP: Creating a pod to test consume secrets Jun 15 12:17:35.224: INFO: Waiting up to 5m0s for pod "pod-secrets-32030a1a-af02-11ea-99db-0242ac11001b" in namespace "e2e-tests-secrets-xcxhj" to be "success or failure" Jun 15 12:17:35.233: INFO: Pod "pod-secrets-32030a1a-af02-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.240695ms Jun 15 12:17:37.236: INFO: Pod "pod-secrets-32030a1a-af02-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01192712s Jun 15 12:17:39.239: INFO: Pod "pod-secrets-32030a1a-af02-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015040221s Jun 15 12:17:41.872: INFO: Pod "pod-secrets-32030a1a-af02-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.64795855s STEP: Saw pod success Jun 15 12:17:41.872: INFO: Pod "pod-secrets-32030a1a-af02-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:17:41.876: INFO: Trying to get logs from node hunter-worker pod pod-secrets-32030a1a-af02-11ea-99db-0242ac11001b container secret-volume-test: STEP: delete the pod Jun 15 12:17:42.091: INFO: Waiting for pod pod-secrets-32030a1a-af02-11ea-99db-0242ac11001b to disappear Jun 15 12:17:42.095: INFO: Pod pod-secrets-32030a1a-af02-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:17:42.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-xcxhj" for this suite. 
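"Mappings and Item Mode" refers to the items list of a secret volume, which renames keys to chosen file paths and sets a per-file mode. A minimal sketch with hypothetical names and an illustrative mode value:

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map-example
data:
  data-1: dmFsdWUtMQ==                   # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                       # placeholder image
    args: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:
      - key: data-1
        path: new-path-data-1            # the mapping: key data-1 appears under this file name
        mode: 0400                       # the per-item mode; the e2e test asserts on the mode it sets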
Jun 15 12:17:48.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:17:48.210: INFO: namespace: e2e-tests-secrets-xcxhj, resource: bindings, ignored listing per whitelist Jun 15 12:17:48.222: INFO: namespace e2e-tests-secrets-xcxhj deletion completed in 6.124325771s • [SLOW TEST:13.095 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:17:48.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-wpw59 Jun 15 12:18:02.366: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-wpw59 STEP: checking the pod's current state and verifying that restartCount is present Jun 15 12:18:02.368: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:22:03.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-wpw59" for this suite. 
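The probe under test is an HTTP GET liveness probe; as long as it keeps returning a success status the kubelet never restarts the container, which is what the long observation of restartCount above verifies. A minimal sketch, using a stand-in image that always answers the probed path (the e2e pod serves /healthz from its own test image):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-example
spec:
  containers:
  - name: liveness-http
    image: nginx:1.14-alpine             # stand-in server
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /                          # the e2e probe path is /healthz
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
      timeoutSeconds: 5
      failureThreshold: 3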
Jun 15 12:22:14.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:22:14.263: INFO: namespace: e2e-tests-container-probe-wpw59, resource: bindings, ignored listing per whitelist Jun 15 12:22:14.266: INFO: namespace e2e-tests-container-probe-wpw59 deletion completed in 10.738919752s • [SLOW TEST:266.043 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:22:14.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 15 12:22:14.935: INFO: Waiting up to 5m0s for pod "pod-d8a1968f-af02-11ea-99db-0242ac11001b" in namespace "e2e-tests-emptydir-l5tjr" to be "success or failure" Jun 15 12:22:15.288: INFO: Pod "pod-d8a1968f-af02-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 353.098009ms Jun 15 12:22:17.387: INFO: Pod "pod-d8a1968f-af02-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.452155309s Jun 15 12:22:20.126: INFO: Pod "pod-d8a1968f-af02-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.190658591s Jun 15 12:22:22.129: INFO: Pod "pod-d8a1968f-af02-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 7.19423089s Jun 15 12:22:24.133: INFO: Pod "pod-d8a1968f-af02-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.197707319s STEP: Saw pod success Jun 15 12:22:24.133: INFO: Pod "pod-d8a1968f-af02-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:22:24.135: INFO: Trying to get logs from node hunter-worker pod pod-d8a1968f-af02-11ea-99db-0242ac11001b container test-container: STEP: delete the pod Jun 15 12:22:25.317: INFO: Waiting for pod pod-d8a1968f-af02-11ea-99db-0242ac11001b to disappear Jun 15 12:22:25.320: INFO: Pod pod-d8a1968f-af02-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:22:25.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-l5tjr" for this suite. 
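"(non-root,0644,tmpfs)" names the three variables of this emptyDir test: the pod runs as a non-root user, the written file is expected to carry mode 0644, and the volume medium is tmpfs. A rough equivalent with placeholder names and a generic image instead of the framework's own test image:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                      # arbitrary non-root UID
  containers:
  - name: test-container
    image: busybox                       # placeholder image
    # create a 0644 file on the tmpfs-backed volume, then show its permissions and the mount type
    args: ["sh", "-c", "umask 022 && echo hello > /mnt/test/file && ls -l /mnt/test && mount | grep /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                     # tmpfs; omit this field for the node's default backing store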
Jun 15 12:22:35.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:22:35.476: INFO: namespace: e2e-tests-emptydir-l5tjr, resource: bindings, ignored listing per whitelist Jun 15 12:22:35.502: INFO: namespace e2e-tests-emptydir-l5tjr deletion completed in 10.178029402s • [SLOW TEST:21.236 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:22:35.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jun 15 12:22:41.716: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-e51bf106-af02-11ea-99db-0242ac11001b,GenerateName:,Namespace:e2e-tests-events-8x7ph,SelfLink:/api/v1/namespaces/e2e-tests-events-8x7ph/pods/send-events-e51bf106-af02-11ea-99db-0242ac11001b,UID:e51c5881-af02-11ea-99e8-0242ac110002,ResourceVersion:16081038,Generation:0,CreationTimestamp:2020-06-15 12:22:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 683054617,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-tq27j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tq27j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-tq27j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ac9dc0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001ac9de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:22:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:22:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:22:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:22:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.54,StartTime:2020-06-15 12:22:35 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-06-15 12:22:38 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://494fb44b4e94840ab4e6c88e589e86d65e7c06d0555a015c02d37a6c9cc2d853}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jun 15 12:22:43.720: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jun 15 12:22:46.107: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:22:46.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-8x7ph" for this suite. Jun 15 12:23:38.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:23:38.229: INFO: namespace: e2e-tests-events-8x7ph, resource: bindings, ignored listing per whitelist Jun 15 12:23:38.274: INFO: namespace e2e-tests-events-8x7ph deletion completed in 52.132682904s • [SLOW TEST:62.772 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:23:38.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 15 12:23:38.411: INFO: Waiting up to 5m0s for pod "pod-0a7ee394-af03-11ea-99db-0242ac11001b" in namespace "e2e-tests-emptydir-7g44m" to be "success or failure" Jun 15 12:23:38.416: INFO: Pod "pod-0a7ee394-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.529772ms Jun 15 12:23:40.611: INFO: Pod "pod-0a7ee394-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.200045024s Jun 15 12:23:42.615: INFO: Pod "pod-0a7ee394-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204239489s Jun 15 12:23:44.620: INFO: Pod "pod-0a7ee394-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.208484537s Jun 15 12:23:47.190: INFO: Pod "pod-0a7ee394-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.779151334s Jun 15 12:23:49.194: INFO: Pod "pod-0a7ee394-af03-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 10.78330335s Jun 15 12:23:51.201: INFO: Pod "pod-0a7ee394-af03-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.790243861s STEP: Saw pod success Jun 15 12:23:51.201: INFO: Pod "pod-0a7ee394-af03-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:23:51.204: INFO: Trying to get logs from node hunter-worker pod pod-0a7ee394-af03-11ea-99db-0242ac11001b container test-container: STEP: delete the pod Jun 15 12:23:51.221: INFO: Waiting for pod pod-0a7ee394-af03-11ea-99db-0242ac11001b to disappear Jun 15 12:23:51.232: INFO: Pod pod-0a7ee394-af03-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:23:51.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-7g44m" for this suite. Jun 15 12:23:57.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:23:57.299: INFO: namespace: e2e-tests-emptydir-7g44m, resource: bindings, ignored listing per whitelist Jun 15 12:23:57.302: INFO: namespace e2e-tests-emptydir-7g44m deletion completed in 6.066994571s • [SLOW TEST:19.027 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:23:57.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:23:57.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-fpdd4" for this suite. 
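"Secure master service" refers to the built-in kubernetes service in the default namespace, which fronts the API server over HTTPS; the test essentially asserts that this service exists and exposes the secure (443) port. It can be inspected directly with:

kubectl get service kubernetes --namespace=default -o yaml
# expect a ClusterIP service whose ports include the https port 443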
Jun 15 12:24:03.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:24:03.466: INFO: namespace: e2e-tests-services-fpdd4, resource: bindings, ignored listing per whitelist Jun 15 12:24:03.498: INFO: namespace e2e-tests-services-fpdd4 deletion completed in 6.061662793s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.196 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:24:03.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-199e5b76-af03-11ea-99db-0242ac11001b STEP: Creating configMap with name cm-test-opt-upd-199e5c06-af03-11ea-99db-0242ac11001b STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-199e5b76-af03-11ea-99db-0242ac11001b STEP: Updating configmap cm-test-opt-upd-199e5c06-af03-11ea-99db-0242ac11001b STEP: Creating configMap with name cm-test-opt-create-199e5c50-af03-11ea-99db-0242ac11001b STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:25:49.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-8kswn" for this suite. 
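The "optional" behaviour being exercised comes from the optional flag on configMap volume sources: the pod starts even if the referenced configMap does not exist yet, and the kubelet later materialises or removes the files as the configMaps are created, updated and deleted, which is what "waiting to observe update in volume" polls for. A minimal sketch with hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-optional-example
spec:
  containers:
  - name: createcm-volume-test
    image: busybox                       # placeholder image
    args: ["sh", "-c", "while true; do cat /etc/cm-volume-create/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: createcm-volume
      mountPath: /etc/cm-volume-create
  volumes:
  - name: createcm-volume
    configMap:
      name: cm-test-opt-create-example   # may not exist when the pod is created
      optional: true                     # a missing configMap is tolerated; files appear once it is created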
Jun 15 12:26:35.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:26:35.313: INFO: namespace: e2e-tests-configmap-8kswn, resource: bindings, ignored listing per whitelist Jun 15 12:26:35.350: INFO: namespace e2e-tests-configmap-8kswn deletion completed in 45.982762128s • [SLOW TEST:151.852 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:26:35.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jun 15 12:26:36.026: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dddsz,SelfLink:/api/v1/namespaces/e2e-tests-watch-dddsz/configmaps/e2e-watch-test-configmap-a,UID:74356267-af03-11ea-99e8-0242ac110002,ResourceVersion:16081559,Generation:0,CreationTimestamp:2020-06-15 12:26:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 15 12:26:36.027: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dddsz,SelfLink:/api/v1/namespaces/e2e-tests-watch-dddsz/configmaps/e2e-watch-test-configmap-a,UID:74356267-af03-11ea-99e8-0242ac110002,ResourceVersion:16081559,Generation:0,CreationTimestamp:2020-06-15 12:26:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jun 15 12:26:46.067: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dddsz,SelfLink:/api/v1/namespaces/e2e-tests-watch-dddsz/configmaps/e2e-watch-test-configmap-a,UID:74356267-af03-11ea-99e8-0242ac110002,ResourceVersion:16081578,Generation:0,CreationTimestamp:2020-06-15 12:26:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 15 12:26:46.067: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dddsz,SelfLink:/api/v1/namespaces/e2e-tests-watch-dddsz/configmaps/e2e-watch-test-configmap-a,UID:74356267-af03-11ea-99e8-0242ac110002,ResourceVersion:16081578,Generation:0,CreationTimestamp:2020-06-15 12:26:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jun 15 12:26:56.073: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dddsz,SelfLink:/api/v1/namespaces/e2e-tests-watch-dddsz/configmaps/e2e-watch-test-configmap-a,UID:74356267-af03-11ea-99e8-0242ac110002,ResourceVersion:16081598,Generation:0,CreationTimestamp:2020-06-15 12:26:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 15 12:26:56.074: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dddsz,SelfLink:/api/v1/namespaces/e2e-tests-watch-dddsz/configmaps/e2e-watch-test-configmap-a,UID:74356267-af03-11ea-99e8-0242ac110002,ResourceVersion:16081598,Generation:0,CreationTimestamp:2020-06-15 12:26:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jun 15 12:27:06.079: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dddsz,SelfLink:/api/v1/namespaces/e2e-tests-watch-dddsz/configmaps/e2e-watch-test-configmap-a,UID:74356267-af03-11ea-99e8-0242ac110002,ResourceVersion:16081618,Generation:0,CreationTimestamp:2020-06-15 12:26:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Jun 15 12:27:06.079: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dddsz,SelfLink:/api/v1/namespaces/e2e-tests-watch-dddsz/configmaps/e2e-watch-test-configmap-a,UID:74356267-af03-11ea-99e8-0242ac110002,ResourceVersion:16081618,Generation:0,CreationTimestamp:2020-06-15 12:26:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jun 15 12:27:16.085: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-dddsz,SelfLink:/api/v1/namespaces/e2e-tests-watch-dddsz/configmaps/e2e-watch-test-configmap-b,UID:8c3d51c1-af03-11ea-99e8-0242ac110002,ResourceVersion:16081638,Generation:0,CreationTimestamp:2020-06-15 12:27:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 15 12:27:16.085: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-dddsz,SelfLink:/api/v1/namespaces/e2e-tests-watch-dddsz/configmaps/e2e-watch-test-configmap-b,UID:8c3d51c1-af03-11ea-99e8-0242ac110002,ResourceVersion:16081638,Generation:0,CreationTimestamp:2020-06-15 12:27:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jun 15 12:27:26.094: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-dddsz,SelfLink:/api/v1/namespaces/e2e-tests-watch-dddsz/configmaps/e2e-watch-test-configmap-b,UID:8c3d51c1-af03-11ea-99e8-0242ac110002,ResourceVersion:16081658,Generation:0,CreationTimestamp:2020-06-15 12:27:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 15 12:27:26.094: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-dddsz,SelfLink:/api/v1/namespaces/e2e-tests-watch-dddsz/configmaps/e2e-watch-test-configmap-b,UID:8c3d51c1-af03-11ea-99e8-0242ac110002,ResourceVersion:16081658,Generation:0,CreationTimestamp:2020-06-15 12:27:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:27:36.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-dddsz" for this suite. Jun 15 12:27:42.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:27:42.436: INFO: namespace: e2e-tests-watch-dddsz, resource: bindings, ignored listing per whitelist Jun 15 12:27:42.438: INFO: namespace e2e-tests-watch-dddsz deletion completed in 6.338298826s • [SLOW TEST:67.087 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:27:42.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 15 12:27:42.588: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
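The "simple daemon set" created at this point corresponds roughly to the manifest below (the label key and container name are guesses; the two images are the ones named later in this log), with the RollingUpdate strategy that the rest of the test exercises by swapping the image:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set                    # hypothetical label
  updateStrategy:
    type: RollingUpdate                  # replace pods node by node when the template changes
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app                        # hypothetical container name
        image: docker.io/library/nginx:1.14-alpine

# the image swap that triggers the rolling update observed below
# kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0 --namespace=<namespace>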
Jun 15 12:27:42.596: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:27:42.597: INFO: Number of nodes with available pods: 0 Jun 15 12:27:42.597: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:27:43.601: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:27:43.604: INFO: Number of nodes with available pods: 0 Jun 15 12:27:43.604: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:27:45.652: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:27:45.728: INFO: Number of nodes with available pods: 0 Jun 15 12:27:45.728: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:27:46.651: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:27:46.654: INFO: Number of nodes with available pods: 0 Jun 15 12:27:46.654: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:27:48.519: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:27:49.298: INFO: Number of nodes with available pods: 0 Jun 15 12:27:49.298: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:27:49.814: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:27:49.817: INFO: Number of nodes with available pods: 0 Jun 15 12:27:49.817: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:27:50.601: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:27:50.604: INFO: Number of nodes with available pods: 0 Jun 15 12:27:50.604: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:27:51.607: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:27:51.609: INFO: Number of nodes with available pods: 2 Jun 15 12:27:51.609: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jun 15 12:27:51.638: INFO: Wrong image for pod: daemon-set-4qt67. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:27:51.638: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:27:51.644: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:27:52.649: INFO: Wrong image for pod: daemon-set-4qt67. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 15 12:27:52.649: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:27:52.653: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:27:53.649: INFO: Wrong image for pod: daemon-set-4qt67. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:27:53.649: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:27:53.653: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:27:54.649: INFO: Wrong image for pod: daemon-set-4qt67. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:27:54.649: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:27:54.653: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:27:55.648: INFO: Wrong image for pod: daemon-set-4qt67. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:27:55.648: INFO: Pod daemon-set-4qt67 is not available Jun 15 12:27:55.648: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:27:55.651: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:27:56.647: INFO: Pod daemon-set-4q8ps is not available Jun 15 12:27:56.647: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:27:56.651: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:27:57.648: INFO: Pod daemon-set-4q8ps is not available Jun 15 12:27:57.648: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:27:57.651: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:27:58.740: INFO: Pod daemon-set-4q8ps is not available Jun 15 12:27:58.740: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:27:58.744: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:27:59.648: INFO: Pod daemon-set-4q8ps is not available Jun 15 12:27:59.648: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 15 12:27:59.652: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:01.595: INFO: Pod daemon-set-4q8ps is not available Jun 15 12:28:01.595: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:28:01.600: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:03.730: INFO: Pod daemon-set-4q8ps is not available Jun 15 12:28:03.730: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:28:04.318: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:04.686: INFO: Pod daemon-set-4q8ps is not available Jun 15 12:28:04.686: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:28:04.690: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:05.648: INFO: Pod daemon-set-4q8ps is not available Jun 15 12:28:05.648: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:28:05.652: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:06.722: INFO: Pod daemon-set-4q8ps is not available Jun 15 12:28:06.722: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:28:06.734: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:07.648: INFO: Pod daemon-set-4q8ps is not available Jun 15 12:28:07.648: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:28:07.652: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:08.675: INFO: Pod daemon-set-4q8ps is not available Jun 15 12:28:08.675: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:28:08.678: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:09.648: INFO: Pod daemon-set-4q8ps is not available Jun 15 12:28:09.648: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 15 12:28:09.653: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:10.671: INFO: Pod daemon-set-4q8ps is not available Jun 15 12:28:10.671: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:28:10.717: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:11.648: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:28:11.652: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:12.651: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:28:12.655: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:13.649: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:28:13.649: INFO: Pod daemon-set-db7ms is not available Jun 15 12:28:13.653: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:14.657: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:28:14.657: INFO: Pod daemon-set-db7ms is not available Jun 15 12:28:14.675: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:15.647: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:28:15.647: INFO: Pod daemon-set-db7ms is not available Jun 15 12:28:15.650: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:16.649: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:28:16.649: INFO: Pod daemon-set-db7ms is not available Jun 15 12:28:16.653: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:17.647: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:28:17.648: INFO: Pod daemon-set-db7ms is not available Jun 15 12:28:17.651: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:18.648: INFO: Wrong image for pod: daemon-set-db7ms. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:28:18.648: INFO: Pod daemon-set-db7ms is not available Jun 15 12:28:18.653: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:19.649: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:28:19.649: INFO: Pod daemon-set-db7ms is not available Jun 15 12:28:19.653: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:20.657: INFO: Wrong image for pod: daemon-set-db7ms. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 15 12:28:20.657: INFO: Pod daemon-set-db7ms is not available Jun 15 12:28:20.662: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:21.648: INFO: Pod daemon-set-4p82n is not available Jun 15 12:28:21.651: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Jun 15 12:28:21.655: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:21.658: INFO: Number of nodes with available pods: 1 Jun 15 12:28:21.658: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:28:22.675: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:23.484: INFO: Number of nodes with available pods: 1 Jun 15 12:28:23.484: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:28:23.663: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:23.667: INFO: Number of nodes with available pods: 1 Jun 15 12:28:23.667: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:28:25.780: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:25.783: INFO: Number of nodes with available pods: 1 Jun 15 12:28:25.783: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:28:26.663: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:26.667: INFO: Number of nodes with available pods: 1 Jun 15 12:28:26.667: INFO: Node hunter-worker is running more than one daemon pod Jun 15 12:28:27.663: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:27.666: INFO: Number of nodes with available pods: 1 Jun 15 12:28:27.666: INFO: Node 
hunter-worker is running more than one daemon pod Jun 15 12:28:28.663: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 15 12:28:28.666: INFO: Number of nodes with available pods: 2 Jun 15 12:28:28.666: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-t5575, will wait for the garbage collector to delete the pods Jun 15 12:28:28.737: INFO: Deleting DaemonSet.extensions daemon-set took: 5.323651ms Jun 15 12:28:28.837: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.237343ms Jun 15 12:28:41.840: INFO: Number of nodes with available pods: 0 Jun 15 12:28:41.840: INFO: Number of running nodes: 0, number of available pods: 0 Jun 15 12:28:41.843: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-t5575/daemonsets","resourceVersion":"16081882"},"items":null} Jun 15 12:28:41.848: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-t5575/pods","resourceVersion":"16081882"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:28:41.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-t5575" for this suite. Jun 15 12:28:47.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:28:47.928: INFO: namespace: e2e-tests-daemonsets-t5575, resource: bindings, ignored listing per whitelist Jun 15 12:28:47.948: INFO: namespace e2e-tests-daemonsets-t5575 deletion completed in 6.072991615s • [SLOW TEST:65.510 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:28:47.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 15 12:28:48.082: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b" in 
namespace "e2e-tests-downward-api-8nfpj" to be "success or failure" Jun 15 12:28:48.088: INFO: Pod "downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021365ms Jun 15 12:28:50.136: INFO: Pod "downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054174626s Jun 15 12:28:53.600: INFO: Pod "downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.518312491s Jun 15 12:28:55.603: INFO: Pod "downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.521294638s Jun 15 12:28:57.607: INFO: Pod "downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.525105147s Jun 15 12:28:59.611: INFO: Pod "downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.529036088s Jun 15 12:29:01.697: INFO: Pod "downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.614737543s Jun 15 12:29:03.867: INFO: Pod "downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.785084315s Jun 15 12:29:06.406: INFO: Pod "downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.324323527s Jun 15 12:29:08.410: INFO: Pod "downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.327913789s Jun 15 12:29:10.415: INFO: Pod "downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.332584148s Jun 15 12:29:12.419: INFO: Pod "downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.336826211s Jun 15 12:29:14.735: INFO: Pod "downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.652617405s Jun 15 12:29:16.739: INFO: Pod "downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 28.656524331s Jun 15 12:29:18.742: INFO: Pod "downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.659548705s STEP: Saw pod success Jun 15 12:29:18.742: INFO: Pod "downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:29:18.744: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b container client-container: STEP: delete the pod Jun 15 12:29:18.767: INFO: Waiting for pod downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b to disappear Jun 15 12:29:18.771: INFO: Pod downwardapi-volume-c311aa89-af03-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:29:18.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8nfpj" for this suite. 
Jun 15 12:29:24.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:29:24.788: INFO: namespace: e2e-tests-downward-api-8nfpj, resource: bindings, ignored listing per whitelist Jun 15 12:29:24.837: INFO: namespace e2e-tests-downward-api-8nfpj deletion completed in 6.063820842s • [SLOW TEST:36.889 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:29:24.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jun 15 12:29:24.946: INFO: Waiting up to 5m0s for pod "pod-d90befb5-af03-11ea-99db-0242ac11001b" in namespace "e2e-tests-emptydir-zkckl" to be "success or failure" Jun 15 12:29:24.962: INFO: Pod "pod-d90befb5-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.226891ms Jun 15 12:29:26.966: INFO: Pod "pod-d90befb5-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019820598s Jun 15 12:29:28.970: INFO: Pod "pod-d90befb5-af03-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.023342646s Jun 15 12:29:30.972: INFO: Pod "pod-d90befb5-af03-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026240813s STEP: Saw pod success Jun 15 12:29:30.972: INFO: Pod "pod-d90befb5-af03-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:29:30.975: INFO: Trying to get logs from node hunter-worker2 pod pod-d90befb5-af03-11ea-99db-0242ac11001b container test-container: STEP: delete the pod Jun 15 12:29:31.030: INFO: Waiting for pod pod-d90befb5-af03-11ea-99db-0242ac11001b to disappear Jun 15 12:29:31.041: INFO: Pod pod-d90befb5-af03-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:29:31.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-zkckl" for this suite. 
Jun 15 12:29:37.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:29:37.116: INFO: namespace: e2e-tests-emptydir-zkckl, resource: bindings, ignored listing per whitelist Jun 15 12:29:37.147: INFO: namespace e2e-tests-emptydir-zkckl deletion completed in 6.104364545s • [SLOW TEST:12.311 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:29:37.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:29:44.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-sw5g8" for this suite. 
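The adoption asserted in the steps above ("Then the orphan pod is adopted") can also be observed directly from kubectl while such a pod exists; a minimal sketch, using the pod name from the log and a placeholder namespace (the test namespace shown here has since been deleted):

  # Sketch only: after a ReplicationController with a matching selector is created,
  # the formerly orphaned pod should carry an ownerReference pointing at it.
  kubectl -n <test-namespace> get pod pod-adoption \
    -o jsonpath='{.metadata.ownerReferences[*].kind}{" "}{.metadata.ownerReferences[*].name}'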
Jun 15 12:30:06.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:30:06.408: INFO: namespace: e2e-tests-replication-controller-sw5g8, resource: bindings, ignored listing per whitelist Jun 15 12:30:06.433: INFO: namespace e2e-tests-replication-controller-sw5g8 deletion completed in 22.127327652s • [SLOW TEST:29.285 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:30:06.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Jun 15 12:30:06.558: INFO: Waiting up to 5m0s for pod "client-containers-f1d737b9-af03-11ea-99db-0242ac11001b" in namespace "e2e-tests-containers-8wv5x" to be "success or failure" Jun 15 12:30:06.585: INFO: Pod "client-containers-f1d737b9-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.564247ms Jun 15 12:30:08.588: INFO: Pod "client-containers-f1d737b9-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029821578s Jun 15 12:30:10.593: INFO: Pod "client-containers-f1d737b9-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034920934s Jun 15 12:30:12.596: INFO: Pod "client-containers-f1d737b9-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03758489s Jun 15 12:30:14.611: INFO: Pod "client-containers-f1d737b9-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052573376s Jun 15 12:30:16.820: INFO: Pod "client-containers-f1d737b9-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.262353662s Jun 15 12:30:18.824: INFO: Pod "client-containers-f1d737b9-af03-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.265728942s STEP: Saw pod success Jun 15 12:30:18.824: INFO: Pod "client-containers-f1d737b9-af03-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:30:18.826: INFO: Trying to get logs from node hunter-worker2 pod client-containers-f1d737b9-af03-11ea-99db-0242ac11001b container test-container: STEP: delete the pod Jun 15 12:30:18.888: INFO: Waiting for pod client-containers-f1d737b9-af03-11ea-99db-0242ac11001b to disappear Jun 15 12:30:18.910: INFO: Pod client-containers-f1d737b9-af03-11ea-99db-0242ac11001b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:30:18.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-8wv5x" for this suite. Jun 15 12:30:24.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:30:24.958: INFO: namespace: e2e-tests-containers-8wv5x, resource: bindings, ignored listing per whitelist Jun 15 12:30:25.036: INFO: namespace e2e-tests-containers-8wv5x deletion completed in 6.123045825s • [SLOW TEST:18.603 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:30:25.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jun 15 12:30:25.169: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fcefe7ce-af03-11ea-99db-0242ac11001b" in namespace "e2e-tests-projected-4ncqd" to be "success or failure" Jun 15 12:30:25.192: INFO: Pod "downwardapi-volume-fcefe7ce-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.578849ms Jun 15 12:30:27.277: INFO: Pod "downwardapi-volume-fcefe7ce-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10746567s Jun 15 12:30:29.280: INFO: Pod "downwardapi-volume-fcefe7ce-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111015394s Jun 15 12:30:31.284: INFO: Pod "downwardapi-volume-fcefe7ce-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11502206s Jun 15 12:30:33.419: INFO: Pod "downwardapi-volume-fcefe7ce-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.249970355s Jun 15 12:30:35.544: INFO: Pod "downwardapi-volume-fcefe7ce-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.37537416s Jun 15 12:30:37.548: INFO: Pod "downwardapi-volume-fcefe7ce-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.37890557s Jun 15 12:30:39.551: INFO: Pod "downwardapi-volume-fcefe7ce-af03-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.382143852s Jun 15 12:30:41.555: INFO: Pod "downwardapi-volume-fcefe7ce-af03-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 16.385748955s Jun 15 12:30:43.635: INFO: Pod "downwardapi-volume-fcefe7ce-af03-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.465610327s STEP: Saw pod success Jun 15 12:30:43.635: INFO: Pod "downwardapi-volume-fcefe7ce-af03-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:30:43.638: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-fcefe7ce-af03-11ea-99db-0242ac11001b container client-container: STEP: delete the pod Jun 15 12:30:44.369: INFO: Waiting for pod downwardapi-volume-fcefe7ce-af03-11ea-99db-0242ac11001b to disappear Jun 15 12:30:44.618: INFO: Pod downwardapi-volume-fcefe7ce-af03-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:30:44.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4ncqd" for this suite. Jun 15 12:30:51.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:30:51.580: INFO: namespace: e2e-tests-projected-4ncqd, resource: bindings, ignored listing per whitelist Jun 15 12:30:51.584: INFO: namespace e2e-tests-projected-4ncqd deletion completed in 6.962799665s • [SLOW TEST:26.548 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:30:51.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 15 12:30:52.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run 
e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-gt9rk' Jun 15 12:30:54.779: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 15 12:30:54.779: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Jun 15 12:30:57.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-gt9rk' Jun 15 12:30:57.618: INFO: stderr: "" Jun 15 12:30:57.618: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:30:57.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gt9rk" for this suite. Jun 15 12:31:21.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:31:21.872: INFO: namespace: e2e-tests-kubectl-gt9rk, resource: bindings, ignored listing per whitelist Jun 15 12:31:21.895: INFO: namespace e2e-tests-kubectl-gt9rk deletion completed in 24.199410403s • [SLOW TEST:30.310 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:31:21.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-bpqvg Jun 15 12:31:30.057: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-bpqvg STEP: checking the pod's current state and verifying that restartCount is present Jun 15 12:31:30.060: INFO: Initial restart count of pod liveness-http is 0 Jun 15 12:32:04.113: INFO: Restart count of pod e2e-tests-container-probe-bpqvg/liveness-http is now 1 (34.052205794s elapsed) Jun 15 12:32:18.136: INFO: Restart count of pod e2e-tests-container-probe-bpqvg/liveness-http is now 2 (48.075678604s elapsed) Jun 15 
12:32:36.217: INFO: Restart count of pod e2e-tests-container-probe-bpqvg/liveness-http is now 3 (1m6.156914598s elapsed) Jun 15 12:32:54.424: INFO: Restart count of pod e2e-tests-container-probe-bpqvg/liveness-http is now 4 (1m24.363589005s elapsed) Jun 15 12:33:58.055: INFO: Restart count of pod e2e-tests-container-probe-bpqvg/liveness-http is now 5 (2m27.994309122s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:33:58.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-bpqvg" for this suite. Jun 15 12:34:07.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:34:07.919: INFO: namespace: e2e-tests-container-probe-bpqvg, resource: bindings, ignored listing per whitelist Jun 15 12:34:07.946: INFO: namespace e2e-tests-container-probe-bpqvg deletion completed in 9.188400801s • [SLOW TEST:166.052 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:34:07.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-v49c6 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-v49c6 STEP: Deleting pre-stop pod Jun 15 12:34:43.964: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:34:43.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-v49c6" for this suite. 
Jun 15 12:35:23.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:35:24.031: INFO: namespace: e2e-tests-prestop-v49c6, resource: bindings, ignored listing per whitelist Jun 15 12:35:24.045: INFO: namespace e2e-tests-prestop-v49c6 deletion completed in 40.069386308s • [SLOW TEST:76.099 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:35:24.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0615 12:35:28.790992 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 15 12:35:28.791: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:35:28.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-9t5g5" for this suite. 
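For reference, the cascading deletion exercised by this test maps onto ordinary kubectl operations; a rough sketch using the 1.13-era boolean --cascade flag (deployment name is illustrative):

  # Default cascading delete: the garbage collector removes the deployment's
  # ReplicaSets and Pods, which is what the test above waits for.
  kubectl delete deployment <name>
  # --cascade=false orphans the dependents instead, leaving the ReplicaSet and Pods behind.
  kubectl delete deployment <name> --cascade=false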
Jun 15 12:35:38.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:35:38.944: INFO: namespace: e2e-tests-gc-9t5g5, resource: bindings, ignored listing per whitelist Jun 15 12:35:39.083: INFO: namespace e2e-tests-gc-9t5g5 deletion completed in 10.289076298s • [SLOW TEST:15.037 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:35:39.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 15 12:35:40.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-t7nkq' Jun 15 12:35:41.162: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 15 12:35:41.162: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Jun 15 12:35:41.203: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Jun 15 12:35:42.599: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jun 15 12:35:42.989: INFO: scanned /root for discovery docs: Jun 15 12:35:42.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-t7nkq' Jun 15 12:36:14.402: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 15 12:36:14.402: INFO: stdout: "Created e2e-test-nginx-rc-0344cba75310140984999fa9fa192b8a\nScaling up e2e-test-nginx-rc-0344cba75310140984999fa9fa192b8a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-0344cba75310140984999fa9fa192b8a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-0344cba75310140984999fa9fa192b8a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Jun 15 12:36:14.402: INFO: stdout: "Created e2e-test-nginx-rc-0344cba75310140984999fa9fa192b8a\nScaling up e2e-test-nginx-rc-0344cba75310140984999fa9fa192b8a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-0344cba75310140984999fa9fa192b8a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-0344cba75310140984999fa9fa192b8a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Jun 15 12:36:14.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-t7nkq' Jun 15 12:36:14.516: INFO: stderr: "" Jun 15 12:36:14.516: INFO: stdout: "e2e-test-nginx-rc-0344cba75310140984999fa9fa192b8a-bcr7l " Jun 15 12:36:14.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-0344cba75310140984999fa9fa192b8a-bcr7l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t7nkq' Jun 15 12:36:14.605: INFO: stderr: "" Jun 15 12:36:14.605: INFO: stdout: "true" Jun 15 12:36:14.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-0344cba75310140984999fa9fa192b8a-bcr7l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t7nkq' Jun 15 12:36:14.700: INFO: stderr: "" Jun 15 12:36:14.700: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jun 15 12:36:14.700: INFO: e2e-test-nginx-rc-0344cba75310140984999fa9fa192b8a-bcr7l is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Jun 15 12:36:14.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-t7nkq' Jun 15 12:36:14.798: INFO: stderr: "" Jun 15 12:36:14.798: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:36:14.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-t7nkq" for this suite. 
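The deprecation warning captured above ("Command \"rolling-update\" is deprecated, use \"rollout\" instead") points at Deployment-based rollouts as the replacement; a roughly equivalent flow, with illustrative names and the container name assumed to follow kubectl's default image-based naming, would be:

  kubectl create deployment my-nginx --image=docker.io/library/nginx:1.14-alpine
  # Roll to a (here, identical) image and wait for the rollout to complete:
  kubectl set image deployment/my-nginx nginx=docker.io/library/nginx:1.14-alpine
  kubectl rollout status deployment/my-nginx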
Jun 15 12:36:20.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:36:20.894: INFO: namespace: e2e-tests-kubectl-t7nkq, resource: bindings, ignored listing per whitelist Jun 15 12:36:20.899: INFO: namespace e2e-tests-kubectl-t7nkq deletion completed in 6.079380262s • [SLOW TEST:41.817 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:36:20.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 15 12:36:27.576: INFO: Successfully updated pod "labelsupdated109624b-af04-11ea-99db-0242ac11001b" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:36:29.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-q54jg" for this suite. 
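The "Successfully updated pod" step above corresponds to an ordinary label update on a running pod; a sketch with illustrative pod and label names:

  # A projected downwardAPI volume that exposes metadata.labels should
  # eventually reflect this change in the mounted file.
  kubectl label pod <pod-name> mylabel=updated --overwrite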
Jun 15 12:37:00.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:37:18.163: INFO: namespace: e2e-tests-projected-q54jg, resource: bindings, ignored listing per whitelist Jun 15 12:37:23.542: INFO: namespace e2e-tests-projected-q54jg deletion completed in 53.918181574s • [SLOW TEST:62.643 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:37:23.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-f8014141-af04-11ea-99db-0242ac11001b STEP: Creating configMap with name cm-test-opt-upd-f801419b-af04-11ea-99db-0242ac11001b STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-f8014141-af04-11ea-99db-0242ac11001b STEP: Updating configmap cm-test-opt-upd-f801419b-af04-11ea-99db-0242ac11001b STEP: Creating configMap with name cm-test-opt-create-f801420c-af04-11ea-99db-0242ac11001b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:39:31.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-glhgm" for this suite. 
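The create/update/delete sequence on the optional ConfigMaps above maps to plain kubectl operations; a sketch with illustrative names and keys (the 1.13-era boolean --dry-run flag is assumed):

  kubectl create configmap cm-test-opt-upd --from-literal=data-1=value-1
  # Replace the key's value in place; the projected volume should pick up the new contents.
  kubectl create configmap cm-test-opt-upd --from-literal=data-1=value-2 --dry-run -o yaml | kubectl apply -f -
  kubectl delete configmap cm-test-opt-del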
Jun 15 12:40:09.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:40:09.911: INFO: namespace: e2e-tests-projected-glhgm, resource: bindings, ignored listing per whitelist Jun 15 12:40:09.937: INFO: namespace e2e-tests-projected-glhgm deletion completed in 38.153930703s • [SLOW TEST:166.394 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:40:09.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 15 12:40:10.105: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jun 15 12:40:10.165: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 15 12:40:15.170: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 15 12:40:35.177: INFO: Creating deployment "test-rolling-update-deployment" Jun 15 12:40:35.180: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jun 15 12:40:35.194: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jun 15 12:40:37.288: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jun 15 12:40:37.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 15 12:40:39.612: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 15 12:40:41.295: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 15 12:40:43.294: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 15 12:40:45.294: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63727821635, loc:(*time.Location)(0x7950ac0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 15 12:40:47.687: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jun 15 12:40:48.392: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-cmw9h,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cmw9h/deployments/test-rolling-update-deployment,UID:6889ced3-af05-11ea-99e8-0242ac110002,ResourceVersion:16083620,Generation:1,CreationTimestamp:2020-06-15 12:40:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-15 12:40:35 +0000 UTC 2020-06-15 12:40:35 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-15 12:40:45 +0000 UTC 2020-06-15 12:40:35 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 15 12:40:48.399: INFO: New ReplicaSet 
"test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-cmw9h,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cmw9h/replicasets/test-rolling-update-deployment-75db98fb4c,UID:688d2415-af05-11ea-99e8-0242ac110002,ResourceVersion:16083609,Generation:1,CreationTimestamp:2020-06-15 12:40:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 6889ced3-af05-11ea-99e8-0242ac110002 0xc001e298f7 0xc001e298f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 15 12:40:48.399: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 15 12:40:48.400: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-cmw9h,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cmw9h/replicasets/test-rolling-update-controller,UID:59982bd5-af05-11ea-99e8-0242ac110002,ResourceVersion:16083618,Generation:2,CreationTimestamp:2020-06-15 12:40:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 
1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 6889ced3-af05-11ea-99e8-0242ac110002 0xc001e29497 0xc001e29498}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 15 12:40:48.472: INFO: Pod "test-rolling-update-deployment-75db98fb4c-h2bxb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-h2bxb,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-cmw9h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cmw9h/pods/test-rolling-update-deployment-75db98fb4c-h2bxb,UID:689b1ec0-af05-11ea-99e8-0242ac110002,ResourceVersion:16083608,Generation:0,CreationTimestamp:2020-06-15 12:40:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 688d2415-af05-11ea-99e8-0242ac110002 0xc001c86d07 0xc001c86d08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pmb9w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pmb9w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-pmb9w true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c86db0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c86dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:40:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:40:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:40:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 12:40:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.65,StartTime:2020-06-15 12:40:35 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-15 12:40:44 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://b734511d8fe4c342dbc12091e231d7acc7ae185ec72bef3375c9247afe763e8a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:40:48.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-cmw9h" for this suite. 
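The Deployment and ReplicaSet dumps above can also be obtained from kubectl while such a rollout is in flight; a sketch using the deployment name and the name=sample-pod label from the log, with a placeholder for the (now deleted) namespace:

  # New and old ReplicaSets owned by the deployment:
  kubectl -n <namespace> get rs -l name=sample-pod
  # Revision history and current rollout conditions:
  kubectl -n <namespace> rollout history deployment/test-rolling-update-deployment
  kubectl -n <namespace> describe deployment test-rolling-update-deployment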
Jun 15 12:41:03.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:41:03.370: INFO: namespace: e2e-tests-deployment-cmw9h, resource: bindings, ignored listing per whitelist Jun 15 12:41:03.370: INFO: namespace e2e-tests-deployment-cmw9h deletion completed in 14.336833732s • [SLOW TEST:53.433 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:41:03.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-w6nm STEP: Creating a pod to test atomic-volume-subpath Jun 15 12:41:03.838: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-w6nm" in namespace "e2e-tests-subpath-v5kkr" to be "success or failure" Jun 15 12:41:03.862: INFO: Pod "pod-subpath-test-configmap-w6nm": Phase="Pending", Reason="", readiness=false. Elapsed: 24.068436ms Jun 15 12:41:06.344: INFO: Pod "pod-subpath-test-configmap-w6nm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.506005731s Jun 15 12:41:08.347: INFO: Pod "pod-subpath-test-configmap-w6nm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.508640733s Jun 15 12:41:10.351: INFO: Pod "pod-subpath-test-configmap-w6nm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.51245024s Jun 15 12:41:12.356: INFO: Pod "pod-subpath-test-configmap-w6nm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.517758872s Jun 15 12:41:14.371: INFO: Pod "pod-subpath-test-configmap-w6nm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.53208378s Jun 15 12:41:16.809: INFO: Pod "pod-subpath-test-configmap-w6nm": Phase="Pending", Reason="", readiness=false. Elapsed: 12.970518277s Jun 15 12:41:18.986: INFO: Pod "pod-subpath-test-configmap-w6nm": Phase="Running", Reason="", readiness=true. Elapsed: 15.147662553s Jun 15 12:41:20.989: INFO: Pod "pod-subpath-test-configmap-w6nm": Phase="Running", Reason="", readiness=false. Elapsed: 17.150740332s Jun 15 12:41:22.992: INFO: Pod "pod-subpath-test-configmap-w6nm": Phase="Running", Reason="", readiness=false. Elapsed: 19.153772495s Jun 15 12:41:24.996: INFO: Pod "pod-subpath-test-configmap-w6nm": Phase="Running", Reason="", readiness=false. Elapsed: 21.15713653s Jun 15 12:41:26.999: INFO: Pod "pod-subpath-test-configmap-w6nm": Phase="Running", Reason="", readiness=false. 
Elapsed: 23.160402844s Jun 15 12:41:29.003: INFO: Pod "pod-subpath-test-configmap-w6nm": Phase="Running", Reason="", readiness=false. Elapsed: 25.16410934s Jun 15 12:41:31.005: INFO: Pod "pod-subpath-test-configmap-w6nm": Phase="Running", Reason="", readiness=false. Elapsed: 27.167037408s Jun 15 12:41:33.008: INFO: Pod "pod-subpath-test-configmap-w6nm": Phase="Running", Reason="", readiness=false. Elapsed: 29.169674583s Jun 15 12:41:35.011: INFO: Pod "pod-subpath-test-configmap-w6nm": Phase="Running", Reason="", readiness=false. Elapsed: 31.172754702s Jun 15 12:41:37.099: INFO: Pod "pod-subpath-test-configmap-w6nm": Phase="Running", Reason="", readiness=false. Elapsed: 33.261076631s Jun 15 12:41:39.103: INFO: Pod "pod-subpath-test-configmap-w6nm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.264389534s STEP: Saw pod success Jun 15 12:41:39.103: INFO: Pod "pod-subpath-test-configmap-w6nm" satisfied condition "success or failure" Jun 15 12:41:39.105: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-w6nm container test-container-subpath-configmap-w6nm: STEP: delete the pod Jun 15 12:41:39.387: INFO: Waiting for pod pod-subpath-test-configmap-w6nm to disappear Jun 15 12:41:40.009: INFO: Pod pod-subpath-test-configmap-w6nm no longer exists STEP: Deleting pod pod-subpath-test-configmap-w6nm Jun 15 12:41:40.009: INFO: Deleting pod "pod-subpath-test-configmap-w6nm" in namespace "e2e-tests-subpath-v5kkr" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:41:40.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-v5kkr" for this suite. Jun 15 12:41:46.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:41:46.982: INFO: namespace: e2e-tests-subpath-v5kkr, resource: bindings, ignored listing per whitelist Jun 15 12:41:47.004: INFO: namespace e2e-tests-subpath-v5kkr deletion completed in 6.989828067s • [SLOW TEST:43.634 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:41:47.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-n2x7 STEP: Creating a pod to test atomic-volume-subpath Jun 15 12:41:47.128: INFO: 
Waiting up to 5m0s for pod "pod-subpath-test-secret-n2x7" in namespace "e2e-tests-subpath-mrhbp" to be "success or failure" Jun 15 12:41:47.182: INFO: Pod "pod-subpath-test-secret-n2x7": Phase="Pending", Reason="", readiness=false. Elapsed: 53.854628ms Jun 15 12:41:49.186: INFO: Pod "pod-subpath-test-secret-n2x7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057618195s Jun 15 12:41:51.189: INFO: Pod "pod-subpath-test-secret-n2x7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060911253s Jun 15 12:41:53.191: INFO: Pod "pod-subpath-test-secret-n2x7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063360253s Jun 15 12:41:55.297: INFO: Pod "pod-subpath-test-secret-n2x7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.169520888s Jun 15 12:41:57.301: INFO: Pod "pod-subpath-test-secret-n2x7": Phase="Running", Reason="", readiness=false. Elapsed: 10.172634379s Jun 15 12:41:59.303: INFO: Pod "pod-subpath-test-secret-n2x7": Phase="Running", Reason="", readiness=false. Elapsed: 12.175419783s Jun 15 12:42:01.307: INFO: Pod "pod-subpath-test-secret-n2x7": Phase="Running", Reason="", readiness=false. Elapsed: 14.179043368s Jun 15 12:42:03.310: INFO: Pod "pod-subpath-test-secret-n2x7": Phase="Running", Reason="", readiness=false. Elapsed: 16.181884659s Jun 15 12:42:05.313: INFO: Pod "pod-subpath-test-secret-n2x7": Phase="Running", Reason="", readiness=false. Elapsed: 18.185218055s Jun 15 12:42:07.317: INFO: Pod "pod-subpath-test-secret-n2x7": Phase="Running", Reason="", readiness=false. Elapsed: 20.189009454s Jun 15 12:42:09.321: INFO: Pod "pod-subpath-test-secret-n2x7": Phase="Running", Reason="", readiness=false. Elapsed: 22.193301568s Jun 15 12:42:11.324: INFO: Pod "pod-subpath-test-secret-n2x7": Phase="Running", Reason="", readiness=false. Elapsed: 24.196418118s Jun 15 12:42:13.327: INFO: Pod "pod-subpath-test-secret-n2x7": Phase="Running", Reason="", readiness=false. Elapsed: 26.198896547s Jun 15 12:42:15.495: INFO: Pod "pod-subpath-test-secret-n2x7": Phase="Running", Reason="", readiness=false. Elapsed: 28.367264456s Jun 15 12:42:17.498: INFO: Pod "pod-subpath-test-secret-n2x7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.370541839s STEP: Saw pod success Jun 15 12:42:17.499: INFO: Pod "pod-subpath-test-secret-n2x7" satisfied condition "success or failure" Jun 15 12:42:17.501: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-n2x7 container test-container-subpath-secret-n2x7: STEP: delete the pod Jun 15 12:42:17.542: INFO: Waiting for pod pod-subpath-test-secret-n2x7 to disappear Jun 15 12:42:17.630: INFO: Pod pod-subpath-test-secret-n2x7 no longer exists STEP: Deleting pod pod-subpath-test-secret-n2x7 Jun 15 12:42:17.630: INFO: Deleting pod "pod-subpath-test-secret-n2x7" in namespace "e2e-tests-subpath-mrhbp" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:42:17.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-mrhbp" for this suite. 
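The two Subpath cases above mount a single key of a ConfigMap (or Secret) at a subPath inside the container. A minimal sketch of the same wiring; the names demo-config and demo-subpath are illustrative, not taken from the run.

kubectl create configmap demo-config --from-literal=checker.conf='some data'

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-subpath
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/demo/checker.conf"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/demo/checker.conf
      subPath: checker.conf        # mount only this key, not the whole volume
  volumes:
  - name: cfg
    configMap:
      name: demo-config
EOF

kubectl logs demo-subpath   # prints "some data" once the pod has run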
Jun 15 12:42:23.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:42:23.689: INFO: namespace: e2e-tests-subpath-mrhbp, resource: bindings, ignored listing per whitelist Jun 15 12:42:23.785: INFO: namespace e2e-tests-subpath-mrhbp deletion completed in 6.150038259s • [SLOW TEST:36.781 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:42:23.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jun 15 12:42:23.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-w872t' Jun 15 12:42:27.131: INFO: stderr: "" Jun 15 12:42:27.131: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jun 15 12:42:28.134: INFO: Selector matched 1 pods for map[app:redis] Jun 15 12:42:28.134: INFO: Found 0 / 1 Jun 15 12:42:29.135: INFO: Selector matched 1 pods for map[app:redis] Jun 15 12:42:29.135: INFO: Found 0 / 1 Jun 15 12:42:30.479: INFO: Selector matched 1 pods for map[app:redis] Jun 15 12:42:30.480: INFO: Found 0 / 1 Jun 15 12:42:31.135: INFO: Selector matched 1 pods for map[app:redis] Jun 15 12:42:31.135: INFO: Found 0 / 1 Jun 15 12:42:32.135: INFO: Selector matched 1 pods for map[app:redis] Jun 15 12:42:32.135: INFO: Found 0 / 1 Jun 15 12:42:33.136: INFO: Selector matched 1 pods for map[app:redis] Jun 15 12:42:33.136: INFO: Found 0 / 1 Jun 15 12:42:34.135: INFO: Selector matched 1 pods for map[app:redis] Jun 15 12:42:34.135: INFO: Found 0 / 1 Jun 15 12:42:35.202: INFO: Selector matched 1 pods for map[app:redis] Jun 15 12:42:35.202: INFO: Found 1 / 1 Jun 15 12:42:35.202: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 15 12:42:35.205: INFO: Selector matched 1 pods for map[app:redis] Jun 15 12:42:35.205: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jun 15 12:42:35.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-vpbdl --namespace=e2e-tests-kubectl-w872t -p {"metadata":{"annotations":{"x":"y"}}}' Jun 15 12:42:35.299: INFO: stderr: "" Jun 15 12:42:35.299: INFO: stdout: "pod/redis-master-vpbdl patched\n" STEP: checking annotations Jun 15 12:42:35.669: INFO: Selector matched 1 pods for map[app:redis] Jun 15 12:42:35.669: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:42:35.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-w872t" for this suite. Jun 15 12:42:57.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:42:57.728: INFO: namespace: e2e-tests-kubectl-w872t, resource: bindings, ignored listing per whitelist Jun 15 12:42:57.756: INFO: namespace e2e-tests-kubectl-w872t deletion completed in 22.085118317s • [SLOW TEST:33.971 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:42:57.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 15 12:42:57.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jun 15 12:42:58.039: INFO: stderr: "" Jun 15 12:42:58.039: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-06-08T12:07:46Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:42:58.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-fw68h" for this suite. 
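The two kubectl cases above reduce to plain CLI invocations. A minimal sketch, assuming the same app=redis label the RC above uses; the pod name is looked up rather than hard-coded because the generated suffix differs per run.

# Patch an annotation onto a pod owned by the replication controller,
# then read it back; this mirrors the "Kubectl patch" case above.
POD=$(kubectl get pods -l app=redis -o jsonpath='{.items[0].metadata.name}')
kubectl patch pod "$POD" -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod "$POD" -o jsonpath='{.metadata.annotations.x}'   # prints: y

# The "Kubectl version" case simply checks that both client and server halves are reported.
kubectl version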
Jun 15 12:43:04.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:43:04.105: INFO: namespace: e2e-tests-kubectl-fw68h, resource: bindings, ignored listing per whitelist Jun 15 12:43:04.123: INFO: namespace e2e-tests-kubectl-fw68h deletion completed in 6.079580166s • [SLOW TEST:6.366 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:43:04.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-c163c3b4-af05-11ea-99db-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 15 12:43:04.286: INFO: Waiting up to 5m0s for pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b" in namespace "e2e-tests-configmap-fc8ws" to be "success or failure" Jun 15 12:43:04.295: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.502386ms Jun 15 12:43:06.313: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027193767s Jun 15 12:43:08.750: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.464008788s Jun 15 12:43:10.754: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.46794558s Jun 15 12:43:13.245: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.958689585s Jun 15 12:43:16.641: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.354320173s Jun 15 12:43:22.329: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.043258413s Jun 15 12:43:24.773: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.487149341s Jun 15 12:43:31.186: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.899563763s Jun 15 12:43:34.726: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.439934286s Jun 15 12:43:38.021: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 33.734885969s Jun 15 12:43:40.804: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 36.518088744s Jun 15 12:43:44.043: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 39.757042404s Jun 15 12:43:46.047: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 41.760797949s Jun 15 12:43:48.051: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 43.765232511s Jun 15 12:43:51.788: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 47.502038882s Jun 15 12:43:54.266: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 49.979930033s Jun 15 12:43:56.347: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 52.060804422s Jun 15 12:43:58.351: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 54.064306799s STEP: Saw pod success Jun 15 12:43:58.351: INFO: Pod "pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:43:58.353: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b container configmap-volume-test: STEP: delete the pod Jun 15 12:43:58.447: INFO: Waiting for pod pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b to disappear Jun 15 12:43:58.463: INFO: Pod pod-configmaps-c165c0ac-af05-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:43:58.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-fc8ws" for this suite. 
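The ConfigMap volume case above maps one key to a custom path with an explicit per-item file mode. A minimal sketch of the same mapping; names, key and path are illustrative.

kubectl create configmap demo-map --from-literal=data-1='value-1'

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-configmap-mode
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "ls -l /etc/configmap-volume/path/to && cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/configmap-volume
  volumes:
  - name: cfg
    configMap:
      name: demo-map
      items:
      - key: data-1
        path: path/to/data-2
        mode: 0400          # per-item mode, as in the test above
EOF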
Jun 15 12:44:04.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:44:04.663: INFO: namespace: e2e-tests-configmap-fc8ws, resource: bindings, ignored listing per whitelist Jun 15 12:44:04.681: INFO: namespace e2e-tests-configmap-fc8ws deletion completed in 6.214559854s • [SLOW TEST:60.558 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:44:04.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Jun 15 12:44:05.114: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 15 12:44:05.119: INFO: Waiting for terminating namespaces to be deleted... Jun 15 12:44:05.121: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Jun 15 12:44:05.125: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) Jun 15 12:44:05.125: INFO: Container kube-proxy ready: true, restart count 0 Jun 15 12:44:05.125: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 15 12:44:05.125: INFO: Container kindnet-cni ready: true, restart count 0 Jun 15 12:44:05.125: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 15 12:44:05.125: INFO: Container coredns ready: true, restart count 0 Jun 15 12:44:05.125: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Jun 15 12:44:05.130: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 15 12:44:05.130: INFO: Container kindnet-cni ready: true, restart count 0 Jun 15 12:44:05.130: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) Jun 15 12:44:05.130: INFO: Container coredns ready: true, restart count 0 Jun 15 12:44:05.130: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) Jun 15 12:44:05.130: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-ea7cc1e2-af05-11ea-99db-0242ac11001b 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-ea7cc1e2-af05-11ea-99db-0242ac11001b off the node hunter-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-ea7cc1e2-af05-11ea-99db-0242ac11001b [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:44:17.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-p2nxd" for this suite. Jun 15 12:44:27.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:44:27.306: INFO: namespace: e2e-tests-sched-pred-p2nxd, resource: bindings, ignored listing per whitelist Jun 15 12:44:27.336: INFO: namespace e2e-tests-sched-pred-p2nxd deletion completed in 10.069863178s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:22.654 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:44:27.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jun 15 12:44:33.990: INFO: Successfully updated pod "annotationupdatef2fc708a-af05-11ea-99db-0242ac11001b" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:44:36.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-r2xx2" for this suite. 
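The projected downwardAPI case above exposes pod annotations as a file and expects the file content to change when the annotations are updated. A minimal sketch of that wiring; the pod name and annotation key are illustrative.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-annotations
  annotations:
    build: "one"
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF

# Update the annotation; the kubelet refreshes the projected file shortly afterwards.
kubectl annotate pod demo-annotations build="two" --overwrite
kubectl logs demo-annotations --tail=5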
Jun 15 12:44:58.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:44:58.064: INFO: namespace: e2e-tests-projected-r2xx2, resource: bindings, ignored listing per whitelist Jun 15 12:44:58.089: INFO: namespace e2e-tests-projected-r2xx2 deletion completed in 22.078307959s • [SLOW TEST:30.753 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:44:58.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-054d9faf-af06-11ea-99db-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 15 12:44:58.224: INFO: Waiting up to 5m0s for pod "pod-configmaps-054e4405-af06-11ea-99db-0242ac11001b" in namespace "e2e-tests-configmap-x5fdb" to be "success or failure" Jun 15 12:44:58.244: INFO: Pod "pod-configmaps-054e4405-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.261828ms Jun 15 12:45:00.248: INFO: Pod "pod-configmaps-054e4405-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0244963s Jun 15 12:45:02.252: INFO: Pod "pod-configmaps-054e4405-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027982717s Jun 15 12:45:04.255: INFO: Pod "pod-configmaps-054e4405-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031585339s Jun 15 12:45:06.259: INFO: Pod "pod-configmaps-054e4405-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.034946302s Jun 15 12:45:08.262: INFO: Pod "pod-configmaps-054e4405-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.038682605s Jun 15 12:45:10.265: INFO: Pod "pod-configmaps-054e4405-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.041718024s Jun 15 12:45:12.269: INFO: Pod "pod-configmaps-054e4405-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.0451982s Jun 15 12:45:14.612: INFO: Pod "pod-configmaps-054e4405-af06-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 16.388120394s Jun 15 12:45:16.616: INFO: Pod "pod-configmaps-054e4405-af06-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 18.39190853s Jun 15 12:45:18.714: INFO: Pod "pod-configmaps-054e4405-af06-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 20.489874605s STEP: Saw pod success Jun 15 12:45:18.714: INFO: Pod "pod-configmaps-054e4405-af06-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:45:18.754: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-054e4405-af06-11ea-99db-0242ac11001b container configmap-volume-test: STEP: delete the pod Jun 15 12:45:20.146: INFO: Waiting for pod pod-configmaps-054e4405-af06-11ea-99db-0242ac11001b to disappear Jun 15 12:45:20.324: INFO: Pod pod-configmaps-054e4405-af06-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:45:20.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-x5fdb" for this suite. Jun 15 12:45:26.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:45:26.679: INFO: namespace: e2e-tests-configmap-x5fdb, resource: bindings, ignored listing per whitelist Jun 15 12:45:26.704: INFO: namespace e2e-tests-configmap-x5fdb deletion completed in 6.376517878s • [SLOW TEST:28.614 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:45:26.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-1659402f-af06-11ea-99db-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 15 12:45:26.883: INFO: Waiting up to 5m0s for pod "pod-configmaps-165c229c-af06-11ea-99db-0242ac11001b" in namespace "e2e-tests-configmap-rmzss" to be "success or failure" Jun 15 12:45:26.958: INFO: Pod "pod-configmaps-165c229c-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 75.109444ms Jun 15 12:45:29.163: INFO: Pod "pod-configmaps-165c229c-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.280590611s Jun 15 12:45:31.167: INFO: Pod "pod-configmaps-165c229c-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.284352551s Jun 15 12:45:35.083: INFO: Pod "pod-configmaps-165c229c-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.200639754s Jun 15 12:45:37.830: INFO: Pod "pod-configmaps-165c229c-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.946847911s Jun 15 12:45:40.595: INFO: Pod "pod-configmaps-165c229c-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.71193086s Jun 15 12:45:42.799: INFO: Pod "pod-configmaps-165c229c-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.915842865s Jun 15 12:45:45.482: INFO: Pod "pod-configmaps-165c229c-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.599027772s Jun 15 12:45:47.484: INFO: Pod "pod-configmaps-165c229c-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.601279031s Jun 15 12:45:49.487: INFO: Pod "pod-configmaps-165c229c-af06-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 22.604250375s Jun 15 12:45:51.491: INFO: Pod "pod-configmaps-165c229c-af06-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.608505429s STEP: Saw pod success Jun 15 12:45:51.491: INFO: Pod "pod-configmaps-165c229c-af06-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:45:51.495: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-165c229c-af06-11ea-99db-0242ac11001b container configmap-volume-test: STEP: delete the pod Jun 15 12:45:51.743: INFO: Waiting for pod pod-configmaps-165c229c-af06-11ea-99db-0242ac11001b to disappear Jun 15 12:45:51.796: INFO: Pod pod-configmaps-165c229c-af06-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:45:51.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rmzss" for this suite. Jun 15 12:46:02.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:46:02.232: INFO: namespace: e2e-tests-configmap-rmzss, resource: bindings, ignored listing per whitelist Jun 15 12:46:02.266: INFO: namespace e2e-tests-configmap-rmzss deletion completed in 10.465973725s • [SLOW TEST:35.562 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:46:02.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 15 12:46:14.771: INFO: Waiting up to 5m0s for pod "client-envvars-32f33e63-af06-11ea-99db-0242ac11001b" in namespace "e2e-tests-pods-44fbf" to be "success or failure" Jun 15 12:46:15.038: 
INFO: Pod "client-envvars-32f33e63-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 267.019836ms Jun 15 12:46:17.041: INFO: Pod "client-envvars-32f33e63-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270072803s Jun 15 12:46:19.122: INFO: Pod "client-envvars-32f33e63-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.350360663s Jun 15 12:46:21.829: INFO: Pod "client-envvars-32f33e63-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.057715733s Jun 15 12:46:23.832: INFO: Pod "client-envvars-32f33e63-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.060939975s Jun 15 12:46:26.176: INFO: Pod "client-envvars-32f33e63-af06-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 11.404624461s Jun 15 12:46:28.187: INFO: Pod "client-envvars-32f33e63-af06-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.416121368s STEP: Saw pod success Jun 15 12:46:28.187: INFO: Pod "client-envvars-32f33e63-af06-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:46:28.189: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-32f33e63-af06-11ea-99db-0242ac11001b container env3cont: STEP: delete the pod Jun 15 12:46:28.381: INFO: Waiting for pod client-envvars-32f33e63-af06-11ea-99db-0242ac11001b to disappear Jun 15 12:46:28.396: INFO: Pod client-envvars-32f33e63-af06-11ea-99db-0242ac11001b no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:46:28.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-44fbf" for this suite. 
Jun 15 12:47:14.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:47:14.539: INFO: namespace: e2e-tests-pods-44fbf, resource: bindings, ignored listing per whitelist Jun 15 12:47:14.547: INFO: namespace e2e-tests-pods-44fbf deletion completed in 46.14825754s • [SLOW TEST:72.281 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:47:14.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 15 12:47:14.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-h5464' Jun 15 12:47:14.796: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 15 12:47:14.796: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Jun 15 12:47:14.805: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-v7xv2] Jun 15 12:47:14.805: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-v7xv2" in namespace "e2e-tests-kubectl-h5464" to be "running and ready" Jun 15 12:47:14.838: INFO: Pod "e2e-test-nginx-rc-v7xv2": Phase="Pending", Reason="", readiness=false. Elapsed: 33.366435ms Jun 15 12:47:16.841: INFO: Pod "e2e-test-nginx-rc-v7xv2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036050887s Jun 15 12:47:18.984: INFO: Pod "e2e-test-nginx-rc-v7xv2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179265223s Jun 15 12:47:20.988: INFO: Pod "e2e-test-nginx-rc-v7xv2": Phase="Running", Reason="", readiness=true. Elapsed: 6.183122119s Jun 15 12:47:20.988: INFO: Pod "e2e-test-nginx-rc-v7xv2" satisfied condition "running and ready" Jun 15 12:47:20.988: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-v7xv2] Jun 15 12:47:20.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-h5464' Jun 15 12:47:21.120: INFO: stderr: "" Jun 15 12:47:21.120: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Jun 15 12:47:21.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-h5464' Jun 15 12:47:21.245: INFO: stderr: "" Jun 15 12:47:21.245: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:47:21.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-h5464" for this suite. Jun 15 12:47:27.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:47:27.314: INFO: namespace: e2e-tests-kubectl-h5464, resource: bindings, ignored listing per whitelist Jun 15 12:47:27.332: INFO: namespace e2e-tests-kubectl-h5464 deletion completed in 6.084715758s • [SLOW TEST:12.785 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:47:27.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 15 12:47:27.454: INFO: Waiting up to 5m0s for pod "pod-5e41636e-af06-11ea-99db-0242ac11001b" in namespace "e2e-tests-emptydir-k5jgc" to be "success or failure" Jun 15 12:47:27.506: INFO: Pod "pod-5e41636e-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 52.443979ms Jun 15 12:47:30.710: INFO: Pod "pod-5e41636e-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.25556554s Jun 15 12:47:32.713: INFO: Pod "pod-5e41636e-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.259286319s Jun 15 12:47:34.758: INFO: Pod "pod-5e41636e-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.303725324s Jun 15 12:47:36.761: INFO: Pod "pod-5e41636e-af06-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.307197002s STEP: Saw pod success Jun 15 12:47:36.761: INFO: Pod "pod-5e41636e-af06-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:47:36.763: INFO: Trying to get logs from node hunter-worker pod pod-5e41636e-af06-11ea-99db-0242ac11001b container test-container: STEP: delete the pod Jun 15 12:47:37.010: INFO: Waiting for pod pod-5e41636e-af06-11ea-99db-0242ac11001b to disappear Jun 15 12:47:37.086: INFO: Pod pod-5e41636e-af06-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:47:37.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-k5jgc" for this suite. Jun 15 12:47:47.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:47:47.268: INFO: namespace: e2e-tests-emptydir-k5jgc, resource: bindings, ignored listing per whitelist Jun 15 12:47:47.291: INFO: namespace e2e-tests-emptydir-k5jgc deletion completed in 10.201857444s • [SLOW TEST:19.959 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:47:47.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-6a298649-af06-11ea-99db-0242ac11001b STEP: Creating a pod to test consume configMaps Jun 15 12:47:47.411: INFO: Waiting up to 5m0s for pod "pod-configmaps-6a2a5096-af06-11ea-99db-0242ac11001b" in namespace "e2e-tests-configmap-kjm9h" to be "success or failure" Jun 15 12:47:47.423: INFO: Pod "pod-configmaps-6a2a5096-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.653246ms Jun 15 12:47:49.428: INFO: Pod "pod-configmaps-6a2a5096-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017366457s Jun 15 12:47:51.440: INFO: Pod "pod-configmaps-6a2a5096-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029600086s Jun 15 12:47:53.443: INFO: Pod "pod-configmaps-6a2a5096-af06-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 6.032320462s Jun 15 12:47:55.445: INFO: Pod "pod-configmaps-6a2a5096-af06-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.034558756s STEP: Saw pod success Jun 15 12:47:55.445: INFO: Pod "pod-configmaps-6a2a5096-af06-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:47:55.451: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-6a2a5096-af06-11ea-99db-0242ac11001b container configmap-volume-test: STEP: delete the pod Jun 15 12:47:55.487: INFO: Waiting for pod pod-configmaps-6a2a5096-af06-11ea-99db-0242ac11001b to disappear Jun 15 12:47:55.530: INFO: Pod pod-configmaps-6a2a5096-af06-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:47:55.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-kjm9h" for this suite. Jun 15 12:48:01.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:48:01.566: INFO: namespace: e2e-tests-configmap-kjm9h, resource: bindings, ignored listing per whitelist Jun 15 12:48:01.608: INFO: namespace e2e-tests-configmap-kjm9h deletion completed in 6.075466546s • [SLOW TEST:14.317 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:48:01.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jun 15 12:48:01.745: INFO: Pod name pod-release: Found 0 pods out of 1 Jun 15 12:48:07.470: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:48:08.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-qzx95" for this suite. 
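The ReplicationController case above relabels a pod so it stops matching the RC selector; the RC then releases it (drops the owner reference) and creates a replacement. A minimal sketch; the names and image are illustrative.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: web
        image: docker.io/library/nginx:1.14-alpine
EOF

# Change the label on the managed pod so it no longer matches the selector.
POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
kubectl label pod "$POD" name=released --overwrite

# The old pod is now orphaned and the RC has created a new one to replace it.
kubectl get pods -L name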
Jun 15 12:48:25.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:48:25.104: INFO: namespace: e2e-tests-replication-controller-qzx95, resource: bindings, ignored listing per whitelist Jun 15 12:48:25.114: INFO: namespace e2e-tests-replication-controller-qzx95 deletion completed in 16.388130702s • [SLOW TEST:23.504 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:48:25.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-80c5f107-af06-11ea-99db-0242ac11001b STEP: Creating a pod to test consume secrets Jun 15 12:48:25.430: INFO: Waiting up to 5m0s for pod "pod-secrets-80d1c883-af06-11ea-99db-0242ac11001b" in namespace "e2e-tests-secrets-ftcr8" to be "success or failure" Jun 15 12:48:25.495: INFO: Pod "pod-secrets-80d1c883-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 64.68831ms Jun 15 12:48:27.499: INFO: Pod "pod-secrets-80d1c883-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068824814s Jun 15 12:48:29.682: INFO: Pod "pod-secrets-80d1c883-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.251727561s Jun 15 12:48:31.686: INFO: Pod "pod-secrets-80d1c883-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.255925543s Jun 15 12:48:33.710: INFO: Pod "pod-secrets-80d1c883-af06-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.280475386s STEP: Saw pod success Jun 15 12:48:33.710: INFO: Pod "pod-secrets-80d1c883-af06-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:48:33.712: INFO: Trying to get logs from node hunter-worker pod pod-secrets-80d1c883-af06-11ea-99db-0242ac11001b container secret-volume-test: STEP: delete the pod Jun 15 12:48:34.008: INFO: Waiting for pod pod-secrets-80d1c883-af06-11ea-99db-0242ac11001b to disappear Jun 15 12:48:35.773: INFO: Pod pod-secrets-80d1c883-af06-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:48:35.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-ftcr8" for this suite. 
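The Secrets case above mounts a secret volume with a restrictive defaultMode and reads it back. A minimal sketch; names and key are illustrative.

kubectl create secret generic demo-secret --from-literal=data-1='value-1'

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-secret-mode
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400     # files created read-only for the owner, as in the test above
EOF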
Jun 15 12:48:42.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:48:42.761: INFO: namespace: e2e-tests-secrets-ftcr8, resource: bindings, ignored listing per whitelist Jun 15 12:48:42.789: INFO: namespace e2e-tests-secrets-ftcr8 deletion completed in 7.009861748s • [SLOW TEST:17.676 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:48:42.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:49:06.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-pnf7x" for this suite. 
Jun 15 12:49:14.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:49:14.570: INFO: namespace: e2e-tests-kubelet-test-pnf7x, resource: bindings, ignored listing per whitelist Jun 15 12:49:14.597: INFO: namespace e2e-tests-kubelet-test-pnf7x deletion completed in 8.214803636s • [SLOW TEST:31.807 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:49:14.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-vgrfr Jun 15 12:49:25.105: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-vgrfr STEP: checking the pod's current state and verifying that restartCount is present Jun 15 12:49:25.108: INFO: Initial restart count of pod liveness-http is 0 Jun 15 12:49:45.684: INFO: Restart count of pod e2e-tests-container-probe-vgrfr/liveness-http is now 1 (20.57604085s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:49:45.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-vgrfr" for this suite. 
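The container-probe spec above expects the kubelet to restart the pod once its /healthz HTTP liveness probe starts failing (restart count goes from 0 to 1 in the log). A minimal sketch of a pod with such a probe; the image, port, and timings are illustrative, only the pod name follows the log:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Fill the handler through the promoted fields so the sketch compiles
	// against both older (Handler) and newer (ProbeHandler) core/v1 APIs.
	probe := &corev1.Probe{
		InitialDelaySeconds: 15,
		PeriodSeconds:       3,
		FailureThreshold:    1,
	}
	probe.HTTPGet = &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)}

	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "liveness",
				Image:         "some-registry/liveness-server", // illustrative: an image that starts failing /healthz
				LivenessProbe: probe,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```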
Jun 15 12:49:51.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:49:51.744: INFO: namespace: e2e-tests-container-probe-vgrfr, resource: bindings, ignored listing per whitelist Jun 15 12:49:51.798: INFO: namespace e2e-tests-container-probe-vgrfr deletion completed in 6.06892134s • [SLOW TEST:37.201 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:49:51.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Jun 15 12:49:51.983: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-jc7dn" to be "success or failure" Jun 15 12:49:51.998: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.886065ms Jun 15 12:49:54.587: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.603179867s Jun 15 12:49:56.814: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.830702284s Jun 15 12:49:58.819: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.835453201s Jun 15 12:50:00.822: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.838448295s STEP: Saw pod success Jun 15 12:50:00.822: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jun 15 12:50:00.824: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Jun 15 12:50:00.984: INFO: Waiting for pod pod-host-path-test to disappear Jun 15 12:50:01.048: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:50:01.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-jc7dn" for this suite. 
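The HostPath spec above checks the mode of a mounted host path; a minimal sketch of a pod mounting a hostPath volume (the node path and image are illustrative, the container name follows the log):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	hostPathType := corev1.HostPathDirectoryOrCreate
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						Path: "/tmp/host-path-test", // illustrative directory on the node
						Type: &hostPathType,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-1",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "stat -c %a /test-volume && ls -ld /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```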
Jun 15 12:50:07.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:50:07.099: INFO: namespace: e2e-tests-hostpath-jc7dn, resource: bindings, ignored listing per whitelist Jun 15 12:50:07.198: INFO: namespace e2e-tests-hostpath-jc7dn deletion completed in 6.145386174s • [SLOW TEST:15.400 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:50:07.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-bd8f6ad1-af06-11ea-99db-0242ac11001b STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:50:13.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-2n2km" for this suite. 
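The ConfigMap spec above verifies that both text and binary keys are projected into the volume; a minimal sketch of such a ConfigMap plus a pod that mounts it (object names, keys, byte values, and image are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ConfigMap"},
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-example"},
		// Text keys live in Data, arbitrary bytes in BinaryData; both become
		// files when the ConfigMap is mounted as a volume.
		Data:       map[string]string{"data-1": "value-1"},
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad, 0xbe, 0xef}},
	}

	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-binary-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/data-1 && od -An -tx1 /etc/configmap-volume/dump.bin"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	for _, obj := range []interface{}{cm, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```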
Jun 15 12:50:35.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:50:35.546: INFO: namespace: e2e-tests-configmap-2n2km, resource: bindings, ignored listing per whitelist Jun 15 12:50:35.639: INFO: namespace e2e-tests-configmap-2n2km deletion completed in 22.140322965s • [SLOW TEST:28.441 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:50:35.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0615 12:51:06.272935 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 15 12:51:06.272: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:51:06.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-pv46l" for this suite. 
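The garbage-collector spec above deletes a Deployment with deleteOptions.PropagationPolicy=Orphan and then confirms the ReplicaSet survives. A minimal sketch of the delete options involved; how they are passed to a concrete client call is omitted here:

```go
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Orphan propagation removes only the owner object (the Deployment);
	// dependents such as its ReplicaSet keep running, with their owner
	// references to the deleted object cleared instead of being cascaded away.
	orphan := metav1.DeletePropagationOrphan
	opts := metav1.DeleteOptions{PropagationPolicy: &orphan}

	out, _ := json.MarshalIndent(opts, "", "  ")
	fmt.Println(string(out)) // these options accompany the DELETE request for the Deployment
}
```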
Jun 15 12:51:14.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:51:14.350: INFO: namespace: e2e-tests-gc-pv46l, resource: bindings, ignored listing per whitelist Jun 15 12:51:14.366: INFO: namespace e2e-tests-gc-pv46l deletion completed in 8.090224119s • [SLOW TEST:38.727 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:51:14.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jun 15 12:51:14.453: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:51:15.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-sthkf" for this suite. 
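The CustomResourceDefinition spec above simply creates and deletes a CRD object. A minimal sketch of such a definition using the current apiextensions.k8s.io/v1 types; the v1.13 run in this log would have used v1beta1, and the group, kind, and names here are purely illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	crd := &apiextensionsv1.CustomResourceDefinition{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apiextensions.k8s.io/v1", Kind: "CustomResourceDefinition"},
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"}, // must be <plural>.<group>
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural:   "widgets",
				Singular: "widget",
				Kind:     "Widget",
				ListKind: "WidgetList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}
```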
Jun 15 12:51:21.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:51:21.641: INFO: namespace: e2e-tests-custom-resource-definition-sthkf, resource: bindings, ignored listing per whitelist Jun 15 12:51:21.646: INFO: namespace e2e-tests-custom-resource-definition-sthkf deletion completed in 6.132858639s • [SLOW TEST:7.280 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:51:21.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Jun 15 12:51:21.800: INFO: Waiting up to 5m0s for pod "var-expansion-e9efbd03-af06-11ea-99db-0242ac11001b" in namespace "e2e-tests-var-expansion-g5psv" to be "success or failure" Jun 15 12:51:21.803: INFO: Pod "var-expansion-e9efbd03-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.96337ms Jun 15 12:51:23.807: INFO: Pod "var-expansion-e9efbd03-af06-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007151591s Jun 15 12:51:25.811: INFO: Pod "var-expansion-e9efbd03-af06-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.011723374s Jun 15 12:51:27.815: INFO: Pod "var-expansion-e9efbd03-af06-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015504043s STEP: Saw pod success Jun 15 12:51:27.815: INFO: Pod "var-expansion-e9efbd03-af06-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:51:27.818: INFO: Trying to get logs from node hunter-worker pod var-expansion-e9efbd03-af06-11ea-99db-0242ac11001b container dapi-container: STEP: delete the pod Jun 15 12:51:27.840: INFO: Waiting for pod var-expansion-e9efbd03-af06-11ea-99db-0242ac11001b to disappear Jun 15 12:51:27.847: INFO: Pod var-expansion-e9efbd03-af06-11ea-99db-0242ac11001b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:51:27.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-g5psv" for this suite. 
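The var-expansion spec above checks that $(VAR) references in a container's command are substituted from its environment; a minimal sketch (the variable name, value, and image are illustrative, the container name follows the log):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox", // illustrative
				Env:   []corev1.EnvVar{{Name: "MESSAGE", Value: "hello from the environment"}},
				// $(MESSAGE) is expanded by the kubelet before the command runs,
				// so no shell is needed for the substitution itself.
				Command: []string{"/bin/echo", "$(MESSAGE)"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```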
Jun 15 12:51:33.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:51:33.877: INFO: namespace: e2e-tests-var-expansion-g5psv, resource: bindings, ignored listing per whitelist Jun 15 12:51:33.939: INFO: namespace e2e-tests-var-expansion-g5psv deletion completed in 6.087811914s • [SLOW TEST:12.292 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:51:33.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-9n6lh in namespace e2e-tests-proxy-4qfn8 I0615 12:51:34.258596 6 runners.go:184] Created replication controller with name: proxy-service-9n6lh, namespace: e2e-tests-proxy-4qfn8, replica count: 1 I0615 12:51:35.309088 6 runners.go:184] proxy-service-9n6lh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0615 12:51:36.309445 6 runners.go:184] proxy-service-9n6lh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0615 12:51:37.309797 6 runners.go:184] proxy-service-9n6lh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0615 12:51:38.310056 6 runners.go:184] proxy-service-9n6lh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0615 12:51:39.310256 6 runners.go:184] proxy-service-9n6lh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0615 12:51:40.310488 6 runners.go:184] proxy-service-9n6lh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0615 12:51:41.310744 6 runners.go:184] proxy-service-9n6lh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0615 12:51:42.310996 6 runners.go:184] proxy-service-9n6lh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0615 12:51:43.311273 6 runners.go:184] proxy-service-9n6lh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0615 12:51:44.311533 6 runners.go:184] proxy-service-9n6lh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0615 12:51:45.311795 6 
runners.go:184] proxy-service-9n6lh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0615 12:51:46.311991 6 runners.go:184] proxy-service-9n6lh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0615 12:51:47.312264 6 runners.go:184] proxy-service-9n6lh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0615 12:51:48.312551 6 runners.go:184] proxy-service-9n6lh Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 15 12:51:48.316: INFO: setup took 14.173033888s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jun 15 12:51:48.323: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-4qfn8/pods/http:proxy-service-9n6lh-wb6vm:160/proxy/: foo (200; 6.579014ms) Jun 15 12:51:48.325: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-4qfn8/pods/proxy-service-9n6lh-wb6vm:160/proxy/: foo (200; 8.630616ms) Jun 15 12:51:48.325: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-4qfn8/pods/proxy-service-9n6lh-wb6vm:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:52:11.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-l6rh6" for this suite. 
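The Kubelet spec just above ("should write entries to /etc/hosts") exercises pod-level hostAliases, which the kubelet appends to the container's /etc/hosts. A minimal sketch of a pod using them; the addresses, hostnames, and image are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Each entry becomes an extra line in the pod's /etc/hosts.
			HostAliases: []corev1.HostAlias{{
				IP:        "127.0.0.1",
				Hostnames: []string{"foo.local", "bar.local"},
			}},
			Containers: []corev1.Container{{
				Name:    "busybox-host-aliases",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "cat /etc/hosts"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```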
Jun 15 12:52:53.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:52:53.886: INFO: namespace: e2e-tests-kubelet-test-l6rh6, resource: bindings, ignored listing per whitelist Jun 15 12:52:53.910: INFO: namespace e2e-tests-kubelet-test-l6rh6 deletion completed in 42.112655169s • [SLOW TEST:46.471 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:52:53.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jun 15 12:52:54.054: INFO: Waiting up to 5m0s for pod "downward-api-20eb4991-af07-11ea-99db-0242ac11001b" in namespace "e2e-tests-downward-api-5gnv6" to be "success or failure" Jun 15 12:52:54.078: INFO: Pod "downward-api-20eb4991-af07-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.043663ms Jun 15 12:52:56.081: INFO: Pod "downward-api-20eb4991-af07-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02748644s Jun 15 12:52:58.086: INFO: Pod "downward-api-20eb4991-af07-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.031686458s Jun 15 12:53:00.090: INFO: Pod "downward-api-20eb4991-af07-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036466822s STEP: Saw pod success Jun 15 12:53:00.090: INFO: Pod "downward-api-20eb4991-af07-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:53:00.094: INFO: Trying to get logs from node hunter-worker pod downward-api-20eb4991-af07-11ea-99db-0242ac11001b container dapi-container: STEP: delete the pod Jun 15 12:53:00.139: INFO: Waiting for pod downward-api-20eb4991-af07-11ea-99db-0242ac11001b to disappear Jun 15 12:53:00.147: INFO: Pod downward-api-20eb4991-af07-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:53:00.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5gnv6" for this suite. 
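The Downward API spec above injects the pod's own UID through an environment variable; a minimal sketch using a fieldRef (the env var name and image are illustrative, the container name follows the log):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox", // illustrative
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					ValueFrom: &corev1.EnvVarSource{
						// metadata.uid is resolved by the kubelet when the
						// container starts, so the process sees its pod's UID.
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
				Command: []string{"sh", "-c", "echo POD_UID=$POD_UID"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```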
Jun 15 12:53:06.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:53:06.219: INFO: namespace: e2e-tests-downward-api-5gnv6, resource: bindings, ignored listing per whitelist Jun 15 12:53:06.273: INFO: namespace e2e-tests-downward-api-5gnv6 deletion completed in 6.123028909s • [SLOW TEST:12.362 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:53:06.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-2pgdg [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet Jun 15 12:53:06.452: INFO: Found 0 stateful pods, waiting for 3 Jun 15 12:53:16.462: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 15 12:53:16.462: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 15 12:53:16.462: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Jun 15 12:53:26.458: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 15 12:53:26.458: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 15 12:53:26.458: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jun 15 12:53:26.485: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jun 15 12:53:36.527: INFO: Updating stateful set ss2 Jun 15 12:53:36.540: INFO: Waiting for Pod e2e-tests-statefulset-2pgdg/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Jun 15 12:53:46.682: INFO: Found 2 stateful pods, waiting for 3 Jun 15 12:53:56.688: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 15 12:53:56.688: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently 
Running - Ready=true Jun 15 12:53:56.688: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jun 15 12:53:56.722: INFO: Updating stateful set ss2 Jun 15 12:53:56.732: INFO: Waiting for Pod e2e-tests-statefulset-2pgdg/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 15 12:54:06.764: INFO: Updating stateful set ss2 Jun 15 12:54:06.906: INFO: Waiting for StatefulSet e2e-tests-statefulset-2pgdg/ss2 to complete update Jun 15 12:54:06.906: INFO: Waiting for Pod e2e-tests-statefulset-2pgdg/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 15 12:54:17.068: INFO: Waiting for StatefulSet e2e-tests-statefulset-2pgdg/ss2 to complete update Jun 15 12:54:17.068: INFO: Waiting for Pod e2e-tests-statefulset-2pgdg/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 15 12:54:27.784: INFO: Waiting for StatefulSet e2e-tests-statefulset-2pgdg/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 15 12:54:36.918: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2pgdg Jun 15 12:54:36.922: INFO: Scaling statefulset ss2 to 0 Jun 15 12:55:16.961: INFO: Waiting for statefulset status.replicas updated to 0 Jun 15 12:55:16.963: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:55:16.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-2pgdg" for this suite. Jun 15 12:55:25.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:55:25.086: INFO: namespace: e2e-tests-statefulset-2pgdg, resource: bindings, ignored listing per whitelist Jun 15 12:55:25.097: INFO: namespace e2e-tests-statefulset-2pgdg deletion completed in 8.104165352s • [SLOW TEST:138.824 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:55:25.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-qsrs4 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jun 15 12:55:25.224: INFO: Found 0 stateful pods, waiting for 3 Jun 15 12:55:35.232: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 15 12:55:35.232: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 15 12:55:35.232: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 15 12:55:45.229: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 15 12:55:45.229: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 15 12:55:45.229: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jun 15 12:55:45.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qsrs4 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 15 12:55:45.599: INFO: stderr: "I0615 12:55:45.356931 3235 log.go:172] (0xc00014c580) (0xc0007345a0) Create stream\nI0615 12:55:45.356995 3235 log.go:172] (0xc00014c580) (0xc0007345a0) Stream added, broadcasting: 1\nI0615 12:55:45.359512 3235 log.go:172] (0xc00014c580) Reply frame received for 1\nI0615 12:55:45.359556 3235 log.go:172] (0xc00014c580) (0xc000698dc0) Create stream\nI0615 12:55:45.359568 3235 log.go:172] (0xc00014c580) (0xc000698dc0) Stream added, broadcasting: 3\nI0615 12:55:45.360754 3235 log.go:172] (0xc00014c580) Reply frame received for 3\nI0615 12:55:45.360808 3235 log.go:172] (0xc00014c580) (0xc00036e000) Create stream\nI0615 12:55:45.360820 3235 log.go:172] (0xc00014c580) (0xc00036e000) Stream added, broadcasting: 5\nI0615 12:55:45.361985 3235 log.go:172] (0xc00014c580) Reply frame received for 5\nI0615 12:55:45.590393 3235 log.go:172] (0xc00014c580) Data frame received for 3\nI0615 12:55:45.590437 3235 log.go:172] (0xc000698dc0) (3) Data frame handling\nI0615 12:55:45.590459 3235 log.go:172] (0xc000698dc0) (3) Data frame sent\nI0615 12:55:45.591105 3235 log.go:172] (0xc00014c580) Data frame received for 3\nI0615 12:55:45.591137 3235 log.go:172] (0xc000698dc0) (3) Data frame handling\nI0615 12:55:45.591648 3235 log.go:172] (0xc00014c580) Data frame received for 5\nI0615 12:55:45.591673 3235 log.go:172] (0xc00036e000) (5) Data frame handling\nI0615 12:55:45.593869 3235 log.go:172] (0xc00014c580) Data frame received for 1\nI0615 12:55:45.593886 3235 log.go:172] (0xc0007345a0) (1) Data frame handling\nI0615 12:55:45.593893 3235 log.go:172] (0xc0007345a0) (1) Data frame sent\nI0615 12:55:45.593902 3235 log.go:172] (0xc00014c580) (0xc0007345a0) Stream removed, broadcasting: 1\nI0615 12:55:45.593956 3235 log.go:172] (0xc00014c580) Go away received\nI0615 12:55:45.594050 3235 log.go:172] (0xc00014c580) (0xc0007345a0) Stream removed, broadcasting: 1\nI0615 12:55:45.594061 3235 log.go:172] (0xc00014c580) (0xc000698dc0) Stream removed, broadcasting: 3\nI0615 12:55:45.594070 3235 log.go:172] (0xc00014c580) (0xc00036e000) Stream removed, broadcasting: 5\n" Jun 15 12:55:45.599: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 15 12:55:45.599: INFO: 
stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jun 15 12:55:55.660: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jun 15 12:56:05.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qsrs4 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 15 12:56:05.884: INFO: stderr: "I0615 12:56:05.810356 3257 log.go:172] (0xc0003184d0) (0xc000752640) Create stream\nI0615 12:56:05.810398 3257 log.go:172] (0xc0003184d0) (0xc000752640) Stream added, broadcasting: 1\nI0615 12:56:05.812284 3257 log.go:172] (0xc0003184d0) Reply frame received for 1\nI0615 12:56:05.812328 3257 log.go:172] (0xc0003184d0) (0xc0007526e0) Create stream\nI0615 12:56:05.812340 3257 log.go:172] (0xc0003184d0) (0xc0007526e0) Stream added, broadcasting: 3\nI0615 12:56:05.813459 3257 log.go:172] (0xc0003184d0) Reply frame received for 3\nI0615 12:56:05.813496 3257 log.go:172] (0xc0003184d0) (0xc0005c8e60) Create stream\nI0615 12:56:05.813510 3257 log.go:172] (0xc0003184d0) (0xc0005c8e60) Stream added, broadcasting: 5\nI0615 12:56:05.814262 3257 log.go:172] (0xc0003184d0) Reply frame received for 5\nI0615 12:56:05.876676 3257 log.go:172] (0xc0003184d0) Data frame received for 3\nI0615 12:56:05.876707 3257 log.go:172] (0xc0007526e0) (3) Data frame handling\nI0615 12:56:05.876722 3257 log.go:172] (0xc0007526e0) (3) Data frame sent\nI0615 12:56:05.876781 3257 log.go:172] (0xc0003184d0) Data frame received for 3\nI0615 12:56:05.876792 3257 log.go:172] (0xc0007526e0) (3) Data frame handling\nI0615 12:56:05.876835 3257 log.go:172] (0xc0003184d0) Data frame received for 5\nI0615 12:56:05.876861 3257 log.go:172] (0xc0005c8e60) (5) Data frame handling\nI0615 12:56:05.878068 3257 log.go:172] (0xc0003184d0) Data frame received for 1\nI0615 12:56:05.878093 3257 log.go:172] (0xc000752640) (1) Data frame handling\nI0615 12:56:05.878114 3257 log.go:172] (0xc000752640) (1) Data frame sent\nI0615 12:56:05.878129 3257 log.go:172] (0xc0003184d0) (0xc000752640) Stream removed, broadcasting: 1\nI0615 12:56:05.878146 3257 log.go:172] (0xc0003184d0) Go away received\nI0615 12:56:05.878414 3257 log.go:172] (0xc0003184d0) (0xc000752640) Stream removed, broadcasting: 1\nI0615 12:56:05.878445 3257 log.go:172] (0xc0003184d0) (0xc0007526e0) Stream removed, broadcasting: 3\nI0615 12:56:05.878466 3257 log.go:172] (0xc0003184d0) (0xc0005c8e60) Stream removed, broadcasting: 5\n" Jun 15 12:56:05.884: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 15 12:56:05.884: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 15 12:56:15.904: INFO: Waiting for StatefulSet e2e-tests-statefulset-qsrs4/ss2 to complete update Jun 15 12:56:15.904: INFO: Waiting for Pod e2e-tests-statefulset-qsrs4/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 15 12:56:15.904: INFO: Waiting for Pod e2e-tests-statefulset-qsrs4/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 15 12:56:25.919: INFO: Waiting for StatefulSet e2e-tests-statefulset-qsrs4/ss2 to complete update Jun 15 12:56:25.919: INFO: Waiting for Pod e2e-tests-statefulset-qsrs4/ss2-0 to have revision ss2-6c5cd755cd update 
revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Jun 15 12:56:35.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qsrs4 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 15 12:56:36.256: INFO: stderr: "I0615 12:56:36.048347 3280 log.go:172] (0xc000138580) (0xc0006e65a0) Create stream\nI0615 12:56:36.048433 3280 log.go:172] (0xc000138580) (0xc0006e65a0) Stream added, broadcasting: 1\nI0615 12:56:36.051367 3280 log.go:172] (0xc000138580) Reply frame received for 1\nI0615 12:56:36.051410 3280 log.go:172] (0xc000138580) (0xc0006e6640) Create stream\nI0615 12:56:36.051420 3280 log.go:172] (0xc000138580) (0xc0006e6640) Stream added, broadcasting: 3\nI0615 12:56:36.052395 3280 log.go:172] (0xc000138580) Reply frame received for 3\nI0615 12:56:36.052433 3280 log.go:172] (0xc000138580) (0xc000616c80) Create stream\nI0615 12:56:36.052452 3280 log.go:172] (0xc000138580) (0xc000616c80) Stream added, broadcasting: 5\nI0615 12:56:36.053650 3280 log.go:172] (0xc000138580) Reply frame received for 5\nI0615 12:56:36.248541 3280 log.go:172] (0xc000138580) Data frame received for 5\nI0615 12:56:36.248588 3280 log.go:172] (0xc000616c80) (5) Data frame handling\nI0615 12:56:36.248620 3280 log.go:172] (0xc000138580) Data frame received for 3\nI0615 12:56:36.248646 3280 log.go:172] (0xc0006e6640) (3) Data frame handling\nI0615 12:56:36.248663 3280 log.go:172] (0xc0006e6640) (3) Data frame sent\nI0615 12:56:36.248683 3280 log.go:172] (0xc000138580) Data frame received for 3\nI0615 12:56:36.248694 3280 log.go:172] (0xc0006e6640) (3) Data frame handling\nI0615 12:56:36.251165 3280 log.go:172] (0xc000138580) Data frame received for 1\nI0615 12:56:36.251185 3280 log.go:172] (0xc0006e65a0) (1) Data frame handling\nI0615 12:56:36.251197 3280 log.go:172] (0xc0006e65a0) (1) Data frame sent\nI0615 12:56:36.251206 3280 log.go:172] (0xc000138580) (0xc0006e65a0) Stream removed, broadcasting: 1\nI0615 12:56:36.251346 3280 log.go:172] (0xc000138580) (0xc0006e65a0) Stream removed, broadcasting: 1\nI0615 12:56:36.251369 3280 log.go:172] (0xc000138580) (0xc0006e6640) Stream removed, broadcasting: 3\nI0615 12:56:36.251378 3280 log.go:172] (0xc000138580) (0xc000616c80) Stream removed, broadcasting: 5\n" Jun 15 12:56:36.256: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 15 12:56:36.256: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 15 12:56:46.289: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jun 15 12:56:56.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qsrs4 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 15 12:56:56.537: INFO: stderr: "I0615 12:56:56.439234 3302 log.go:172] (0xc00015c630) (0xc000691180) Create stream\nI0615 12:56:56.439288 3302 log.go:172] (0xc00015c630) (0xc000691180) Stream added, broadcasting: 1\nI0615 12:56:56.449435 3302 log.go:172] (0xc00015c630) Reply frame received for 1\nI0615 12:56:56.449499 3302 log.go:172] (0xc00015c630) (0xc000736000) Create stream\nI0615 12:56:56.449514 3302 log.go:172] (0xc00015c630) (0xc000736000) Stream added, broadcasting: 3\nI0615 12:56:56.455693 3302 log.go:172] (0xc00015c630) Reply frame received for 3\nI0615 12:56:56.455750 3302 log.go:172] (0xc00015c630) (0xc000712000) Create stream\nI0615 12:56:56.455763 3302 
log.go:172] (0xc00015c630) (0xc000712000) Stream added, broadcasting: 5\nI0615 12:56:56.457748 3302 log.go:172] (0xc00015c630) Reply frame received for 5\nI0615 12:56:56.531794 3302 log.go:172] (0xc00015c630) Data frame received for 3\nI0615 12:56:56.531825 3302 log.go:172] (0xc000736000) (3) Data frame handling\nI0615 12:56:56.531841 3302 log.go:172] (0xc000736000) (3) Data frame sent\nI0615 12:56:56.531852 3302 log.go:172] (0xc00015c630) Data frame received for 5\nI0615 12:56:56.531875 3302 log.go:172] (0xc000712000) (5) Data frame handling\nI0615 12:56:56.531894 3302 log.go:172] (0xc00015c630) Data frame received for 3\nI0615 12:56:56.531901 3302 log.go:172] (0xc000736000) (3) Data frame handling\nI0615 12:56:56.532956 3302 log.go:172] (0xc00015c630) Data frame received for 1\nI0615 12:56:56.532976 3302 log.go:172] (0xc000691180) (1) Data frame handling\nI0615 12:56:56.532996 3302 log.go:172] (0xc000691180) (1) Data frame sent\nI0615 12:56:56.533007 3302 log.go:172] (0xc00015c630) (0xc000691180) Stream removed, broadcasting: 1\nI0615 12:56:56.533022 3302 log.go:172] (0xc00015c630) Go away received\nI0615 12:56:56.533423 3302 log.go:172] (0xc00015c630) (0xc000691180) Stream removed, broadcasting: 1\nI0615 12:56:56.533445 3302 log.go:172] (0xc00015c630) (0xc000736000) Stream removed, broadcasting: 3\nI0615 12:56:56.533456 3302 log.go:172] (0xc00015c630) (0xc000712000) Stream removed, broadcasting: 5\n" Jun 15 12:56:56.537: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 15 12:56:56.537: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 15 12:57:06.558: INFO: Waiting for StatefulSet e2e-tests-statefulset-qsrs4/ss2 to complete update Jun 15 12:57:06.558: INFO: Waiting for Pod e2e-tests-statefulset-qsrs4/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 15 12:57:06.558: INFO: Waiting for Pod e2e-tests-statefulset-qsrs4/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 15 12:57:06.558: INFO: Waiting for Pod e2e-tests-statefulset-qsrs4/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 15 12:57:16.567: INFO: Waiting for StatefulSet e2e-tests-statefulset-qsrs4/ss2 to complete update Jun 15 12:57:16.567: INFO: Waiting for Pod e2e-tests-statefulset-qsrs4/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 15 12:57:16.567: INFO: Waiting for Pod e2e-tests-statefulset-qsrs4/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 15 12:57:26.567: INFO: Waiting for StatefulSet e2e-tests-statefulset-qsrs4/ss2 to complete update Jun 15 12:57:26.567: INFO: Waiting for Pod e2e-tests-statefulset-qsrs4/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jun 15 12:57:36.568: INFO: Deleting all statefulset in ns e2e-tests-statefulset-qsrs4 Jun 15 12:57:36.570: INFO: Scaling statefulset ss2 to 0 Jun 15 12:57:56.588: INFO: Waiting for statefulset status.replicas updated to 0 Jun 15 12:57:56.592: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:57:56.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-qsrs4" for this suite. 
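Both StatefulSet specs above drive RollingUpdate behaviour by editing the pod template (nginx 1.14-alpine to 1.15-alpine and back) and, for the canary phase, raising the partition so only higher ordinals move to the new revision. A minimal sketch of the relevant spec fields; the ss2/test/nginx names follow the log, the labels and replica count are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(3)
	partition := int32(2) // only ordinals >= 2 roll to the new template (canary)
	ss := &appsv1.StatefulSet{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "StatefulSet"},
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: map[string]string{"app": "ss2"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "ss2"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name: "nginx",
						// Changing this image (1.14 -> 1.15) creates a new controller revision.
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
					Partition: &partition,
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}
```

Lowering the partition back to 0 lets the remaining ordinals update in reverse order, which is the phased rollout the log walks through; rolling back is just restoring the previous template.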
Jun 15 12:58:04.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:58:04.766: INFO: namespace: e2e-tests-statefulset-qsrs4, resource: bindings, ignored listing per whitelist Jun 15 12:58:04.786: INFO: namespace e2e-tests-statefulset-qsrs4 deletion completed in 8.152989051s • [SLOW TEST:159.688 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jun 15 12:58:04.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 15 12:58:04.928: INFO: Waiting up to 5m0s for pod "pod-da3a4d25-af07-11ea-99db-0242ac11001b" in namespace "e2e-tests-emptydir-6qntv" to be "success or failure" Jun 15 12:58:04.931: INFO: Pod "pod-da3a4d25-af07-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.951434ms Jun 15 12:58:06.936: INFO: Pod "pod-da3a4d25-af07-11ea-99db-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007324151s Jun 15 12:58:08.940: INFO: Pod "pod-da3a4d25-af07-11ea-99db-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.011330281s Jun 15 12:58:10.944: INFO: Pod "pod-da3a4d25-af07-11ea-99db-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015227173s STEP: Saw pod success Jun 15 12:58:10.944: INFO: Pod "pod-da3a4d25-af07-11ea-99db-0242ac11001b" satisfied condition "success or failure" Jun 15 12:58:10.947: INFO: Trying to get logs from node hunter-worker pod pod-da3a4d25-af07-11ea-99db-0242ac11001b container test-container: STEP: delete the pod Jun 15 12:58:10.995: INFO: Waiting for pod pod-da3a4d25-af07-11ea-99db-0242ac11001b to disappear Jun 15 12:58:11.037: INFO: Pod pod-da3a4d25-af07-11ea-99db-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jun 15 12:58:11.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-6qntv" for this suite. 
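The EmptyDir spec above mounts a memory-backed (tmpfs) emptyDir as a non-root user and checks the 0777 mount mode; a minimal sketch of such a pod (the UID and image are illustrative, the container name follows the log):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1000)
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &nonRootUID, // run the pod as a non-root UID
			},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "stat -c %a /test-volume && mount | grep /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```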
Jun 15 12:58:17.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 15 12:58:17.102: INFO: namespace: e2e-tests-emptydir-6qntv, resource: bindings, ignored listing per whitelist Jun 15 12:58:17.127: INFO: namespace e2e-tests-emptydir-6qntv deletion completed in 6.085346261s • [SLOW TEST:12.340 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SJun 15 12:58:17.127: INFO: Running AfterSuite actions on all nodes Jun 15 12:58:17.127: INFO: Running AfterSuite actions on node 1 Jun 15 12:58:17.127: INFO: Skipping dumping logs from cluster Ran 200 of 2164 Specs in 7882.598 seconds SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped PASS