I1221 10:47:47.538076 8 e2e.go:224] Starting e2e run "5312e9ae-23df-11ea-bbd3-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1576925266 - Will randomize all specs
Will run 201 of 2164 specs

Dec 21 10:47:47.967: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 10:47:47.971: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 21 10:47:47.991: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 21 10:47:48.024: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 21 10:47:48.024: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 21 10:47:48.024: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 21 10:47:48.032: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 21 10:47:48.032: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 21 10:47:48.032: INFO: e2e test version: v1.13.12
Dec 21 10:47:48.035: INFO: kube-apiserver version: v1.13.8
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 10:47:48.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
Dec 21 10:47:48.210: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-47jdt.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-47jdt.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-47jdt.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-47jdt.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-47jdt.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-47jdt.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 21 10:48:06.412: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.488: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.517: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.556: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.576: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.640: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.662: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-47jdt.svc.cluster.local from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.681: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.696: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.708: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.712: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.715: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.719: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.723: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.726: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.730: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.733: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-47jdt.svc.cluster.local from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.736: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.741: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.746: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-53ff2918-23df-11ea-bbd3-0242ac110005)
Dec 21 10:48:06.746: INFO: Lookups using e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-47jdt.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-47jdt.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]
Dec 21 10:48:11.932: INFO: DNS probes using e2e-tests-dns-47jdt/dns-test-53ff2918-23df-11ea-bbd3-0242ac110005 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 10:48:12.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-47jdt" for this suite.
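The probe loops above derive each pod's DNS A record by rewriting the pod IP with awk: `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".e2e-tests-dns-47jdt.pod.cluster.local"}'`, i.e. dots in the IP become dashes, followed by `<namespace>.pod.cluster.local`. A minimal Python sketch of that transformation (the helper name `pod_a_record` is mine, not from the test):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Build the dashed pod A record served by cluster DNS,
    e.g. 10.44.0.5 in ns "x" -> 10-44-0-5.x.pod.cluster.local."""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

print(pod_a_record("10.44.0.5", "e2e-tests-dns-47jdt"))
# 10-44-0-5.e2e-tests-dns-47jdt.pod.cluster.local
```

This mirrors only the name construction; the test then resolves that name over both UDP (`+notcp`) and TCP (`+tcp`) with dig.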
Dec 21 10:48:20.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 10:48:20.392: INFO: namespace: e2e-tests-dns-47jdt, resource: bindings, ignored listing per whitelist
Dec 21 10:48:20.423: INFO: namespace e2e-tests-dns-47jdt deletion completed in 8.262181435s
• [SLOW TEST:32.389 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 10:48:20.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 21 10:48:20.761: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-8r5lm,SelfLink:/api/v1/namespaces/e2e-tests-watch-8r5lm/configmaps/e2e-watch-test-watch-closed,UID:675b7dea-23df-11ea-a994-fa163e34d433,ResourceVersion:15554794,Generation:0,CreationTimestamp:2019-12-21 10:48:20 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 21 10:48:20.762: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-8r5lm,SelfLink:/api/v1/namespaces/e2e-tests-watch-8r5lm/configmaps/e2e-watch-test-watch-closed,UID:675b7dea-23df-11ea-a994-fa163e34d433,ResourceVersion:15554795,Generation:0,CreationTimestamp:2019-12-21 10:48:20 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 21 10:48:20.789: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-8r5lm,SelfLink:/api/v1/namespaces/e2e-tests-watch-8r5lm/configmaps/e2e-watch-test-watch-closed,UID:675b7dea-23df-11ea-a994-fa163e34d433,ResourceVersion:15554796,Generation:0,CreationTimestamp:2019-12-21 10:48:20 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 21 10:48:20.789: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-8r5lm,SelfLink:/api/v1/namespaces/e2e-tests-watch-8r5lm/configmaps/e2e-watch-test-watch-closed,UID:675b7dea-23df-11ea-a994-fa163e34d433,ResourceVersion:15554797,Generation:0,CreationTimestamp:2019-12-21 10:48:20 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 10:48:20.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-8r5lm" for this suite.
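The Watchers test above closes its watch after two notifications, mutates the ConfigMap while no watch is open, then opens a new watch at the last ResourceVersion it observed (15554795) and expects exactly the MODIFIED and DELETED events it missed. A sketch of that replay rule over recorded events (the helper name, the event shape, and the integer comparison are my simplifications; real resourceVersions are opaque strings that clients must not interpret numerically):

```python
def resume_watch(events, last_seen_rv):
    """Return events strictly newer than the last observed resourceVersion,
    as a watch restarted with resourceVersion=last_seen_rv would deliver them.
    Simplification: compares resourceVersions as integers for illustration."""
    return [e for e in events if int(e["rv"]) > int(last_seen_rv)]

events = [
    {"type": "ADDED",    "rv": "15554794"},
    {"type": "MODIFIED", "rv": "15554795"},  # first watch closed after this
    {"type": "MODIFIED", "rv": "15554796"},  # happened while watch was closed
    {"type": "DELETED",  "rv": "15554797"},
]
print([e["type"] for e in resume_watch(events, "15554795")])
# ['MODIFIED', 'DELETED']
```

In a real client this is what passing `resourceVersion` when re-establishing a watch achieves: the API server replays every change after that version rather than the full history.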
Dec 21 10:48:26.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 10:48:26.906: INFO: namespace: e2e-tests-watch-8r5lm, resource: bindings, ignored listing per whitelist
Dec 21 10:48:27.028: INFO: namespace e2e-tests-watch-8r5lm deletion completed in 6.233235842s
• [SLOW TEST:6.605 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 10:48:27.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 21 10:48:27.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 21 10:48:27.369: INFO: stderr: ""
Dec 21 10:48:27.369: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:42Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 10:48:27.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-42p6z" for this suite.
Dec 21 10:48:33.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 10:48:33.492: INFO: namespace: e2e-tests-kubectl-42p6z, resource: bindings, ignored listing per whitelist
Dec 21 10:48:33.600: INFO: namespace e2e-tests-kubectl-42p6z deletion completed in 6.219457742s
• [SLOW TEST:6.571 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl version
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 10:48:33.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 21 10:48:47.223: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 10:48:48.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-q955c" for this suite.
Dec 21 10:49:20.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 10:49:20.147: INFO: namespace: e2e-tests-replicaset-q955c, resource: bindings, ignored listing per whitelist
Dec 21 10:49:20.151: INFO: namespace e2e-tests-replicaset-q955c deletion completed in 30.207822984s
• [SLOW TEST:46.551 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 10:49:20.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 21 10:49:32.932: INFO: Successfully updated pod "pod-update-activedeadlineseconds-8add4fe1-23df-11ea-bbd3-0242ac110005"
Dec 21 10:49:32.932: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-8add4fe1-23df-11ea-bbd3-0242ac110005" in namespace "e2e-tests-pods-x9xg6" to be "terminated due to deadline exceeded"
Dec 21 10:49:32.944: INFO: Pod "pod-update-activedeadlineseconds-8add4fe1-23df-11ea-bbd3-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 11.568183ms
Dec 21 10:49:34.981: INFO: Pod "pod-update-activedeadlineseconds-8add4fe1-23df-11ea-bbd3-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.049081519s
Dec 21 10:49:34.981: INFO: Pod "pod-update-activedeadlineseconds-8add4fe1-23df-11ea-bbd3-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 10:49:34.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-x9xg6" for this suite.
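The framework waits in this log ("Waiting up to 5m0s for pod ... to be \"terminated due to deadline exceeded\"", with the elapsed-time lines that follow) are all instances of the same pattern: poll the pod's status until a condition holds or a timeout expires. A simplified sketch of that loop (the helper name and the fake phase sequence are mine; the real framework sleeps between polls and checks wall-clock elapsed time rather than an attempt count):

```python
def wait_for_condition(get_phase, want, attempts=5):
    """Poll get_phase() until it returns `want` or attempts run out.
    Returns (condition_met, polls_used)."""
    for i in range(1, attempts + 1):
        if get_phase() == want:
            return True, i
    return False, attempts

# Simulated pod lifecycle: Running twice, then Failed (DeadlineExceeded
# surfaces as Phase="Failed", as in the log above).
phases = iter(["Running", "Running", "Failed"])
ok, polls = wait_for_condition(lambda: next(phases), "Failed")
print(ok, polls)
# True 3
```

Each poll corresponds to one "Phase=..., Elapsed: ..." line in the log; the wait succeeds as soon as the terminal phase is observed.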
Dec 21 10:49:41.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 10:49:41.144: INFO: namespace: e2e-tests-pods-x9xg6, resource: bindings, ignored listing per whitelist
Dec 21 10:49:41.239: INFO: namespace e2e-tests-pods-x9xg6 deletion completed in 6.244223948s
• [SLOW TEST:21.087 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 10:49:41.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 21 10:49:41.498: INFO: Waiting up to 5m0s for pod "downwardapi-volume-977fd056-23df-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-kpmlm" to be "success or failure"
Dec 21 10:49:41.618: INFO: Pod "downwardapi-volume-977fd056-23df-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 118.995792ms
Dec 21 10:49:43.640: INFO: Pod "downwardapi-volume-977fd056-23df-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141287148s
Dec 21 10:49:45.652: INFO: Pod "downwardapi-volume-977fd056-23df-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153126982s
Dec 21 10:49:47.805: INFO: Pod "downwardapi-volume-977fd056-23df-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.306231603s
Dec 21 10:49:49.841: INFO: Pod "downwardapi-volume-977fd056-23df-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.342022895s
Dec 21 10:49:51.869: INFO: Pod "downwardapi-volume-977fd056-23df-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.370662958s
STEP: Saw pod success
Dec 21 10:49:51.870: INFO: Pod "downwardapi-volume-977fd056-23df-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 10:49:51.914: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-977fd056-23df-11ea-bbd3-0242ac110005 container client-container:
STEP: delete the pod
Dec 21 10:49:52.948: INFO: Waiting for pod downwardapi-volume-977fd056-23df-11ea-bbd3-0242ac110005 to disappear
Dec 21 10:49:52.956: INFO: Pod downwardapi-volume-977fd056-23df-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 10:49:52.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kpmlm" for this suite.
Dec 21 10:49:59.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 10:49:59.171: INFO: namespace: e2e-tests-projected-kpmlm, resource: bindings, ignored listing per whitelist
Dec 21 10:49:59.221: INFO: namespace e2e-tests-projected-kpmlm deletion completed in 6.255074676s
• [SLOW TEST:17.982 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 10:49:59.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-s6rtw in namespace e2e-tests-proxy-qgdp8
I1221 10:49:59.530145 8 runners.go:184] Created replication controller with name: proxy-service-s6rtw, namespace: e2e-tests-proxy-qgdp8, replica count: 1
I1221 10:50:00.581236 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1221 10:50:01.582212 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1221 10:50:02.582809 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1221 10:50:03.583347 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1221 10:50:04.584069 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1221 10:50:05.588010 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1221 10:50:06.589563 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1221 10:50:07.590683 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1221 10:50:08.591367 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1221 10:50:09.591816 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1221 10:50:10.592696 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1221 10:50:11.593753 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1221 10:50:12.594398 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1221 10:50:13.595401 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1221 10:50:14.597062 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1221 10:50:15.598220 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1221 10:50:16.599367 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1221 10:50:17.599866 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1221 10:50:18.600348 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1221 10:50:19.600779 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1221 10:50:20.601293 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1221 10:50:21.601718 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1221 10:50:22.602692 8 runners.go:184] proxy-service-s6rtw Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Dec 21 10:50:22.630: INFO: setup took 23.15548283s, starting test cases
STEP: running 16 cases, 20 attempts per case,
320 total attempts Dec 21 10:50:22.658: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-qgdp8/pods/http:proxy-service-s6rtw-kqpdh:162/proxy/: bar (200; 27.486678ms) Dec 21 10:50:22.658: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-qgdp8/pods/proxy-service-s6rtw-kqpdh:160/proxy/: foo (200; 27.970269ms) Dec 21 10:50:22.671: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-qgdp8/services/http:proxy-service-s6rtw:portname2/proxy/: bar (200; 40.058885ms) Dec 21 10:50:22.680: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-qgdp8/services/https:proxy-service-s6rtw:tlsportname2/proxy/: tls qux (200; 49.8278ms) Dec 21 10:50:22.680: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-qgdp8/pods/https:proxy-service-s6rtw-kqpdh:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Dec 21 10:50:39.805: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ba233037-23df-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001ec8f2a), BlockOwnerDeletion:(*bool)(0xc001ec8f2b)}} Dec 21 10:50:39.969: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ba06a907-23df-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001df219a), BlockOwnerDeletion:(*bool)(0xc001df219b)}} Dec 21 10:50:40.051: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ba15bda0-23df-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001df2372), BlockOwnerDeletion:(*bool)(0xc001df2373)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 21 10:50:45.131: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-44mdw" for this suite. Dec 21 10:50:51.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 21 10:50:51.798: INFO: namespace: e2e-tests-gc-44mdw, resource: bindings, ignored listing per whitelist Dec 21 10:50:51.865: INFO: namespace e2e-tests-gc-44mdw deletion completed in 6.620990575s • [SLOW TEST:12.874 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 21 10:50:51.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Dec 21 10:51:04.138: INFO: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-c19886e3-23df-11ea-bbd3-0242ac110005,GenerateName:,Namespace:e2e-tests-events-pdsmr,SelfLink:/api/v1/namespaces/e2e-tests-events-pdsmr/pods/send-events-c19886e3-23df-11ea-bbd3-0242ac110005,UID:c199cb0c-23df-11ea-a994-fa163e34d433,ResourceVersion:15555192,Generation:0,CreationTimestamp:2019-12-21 10:50:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 89958291,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5pnn9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5pnn9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-5pnn9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001677a00} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001677a20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 10:50:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 10:51:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 10:51:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 10:50:52 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-21 10:50:52 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-21 10:51:02 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://c9f44615997572163351b4b56a8144525e400aef9c3d372ecce657094a674614}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Dec 21 10:51:06.161: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Dec 21 10:51:08.179: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 21 10:51:08.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-pdsmr" for this suite. 
Dec 21 10:51:48.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 10:51:48.429: INFO: namespace: e2e-tests-events-pdsmr, resource: bindings, ignored listing per whitelist
Dec 21 10:51:48.547: INFO: namespace e2e-tests-events-pdsmr deletion completed in 40.286819525s
• [SLOW TEST:56.682 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 10:51:48.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 21 10:51:48.957: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e371901c-23df-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-f45gg" to be "success or failure"
Dec 21 10:51:49.012: INFO: Pod "downwardapi-volume-e371901c-23df-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 55.391654ms
Dec 21 10:51:51.033: INFO: Pod "downwardapi-volume-e371901c-23df-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075937682s
Dec 21 10:51:53.194: INFO: Pod "downwardapi-volume-e371901c-23df-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237413485s
Dec 21 10:51:55.204: INFO: Pod "downwardapi-volume-e371901c-23df-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.247502251s
Dec 21 10:51:57.221: INFO: Pod "downwardapi-volume-e371901c-23df-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.264220324s
Dec 21 10:51:59.234: INFO: Pod "downwardapi-volume-e371901c-23df-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.277516062s
Dec 21 10:52:01.247: INFO: Pod "downwardapi-volume-e371901c-23df-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.289694899s
STEP: Saw pod success
Dec 21 10:52:01.247: INFO: Pod "downwardapi-volume-e371901c-23df-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 10:52:01.249: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e371901c-23df-11ea-bbd3-0242ac110005 container client-container:
STEP: delete the pod
Dec 21 10:52:02.085: INFO: Waiting for pod downwardapi-volume-e371901c-23df-11ea-bbd3-0242ac110005 to disappear
Dec 21 10:52:02.396: INFO: Pod downwardapi-volume-e371901c-23df-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 10:52:02.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f45gg" for this suite.
Dec 21 10:52:08.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 10:52:08.724: INFO: namespace: e2e-tests-projected-f45gg, resource: bindings, ignored listing per whitelist
Dec 21 10:52:08.899: INFO: namespace e2e-tests-projected-f45gg deletion completed in 6.482354751s
• [SLOW TEST:20.351 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 10:52:08.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-pr88x
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 21 10:52:09.182: INFO: Found 0 stateful pods, waiting for 3
Dec 21 10:52:19.305: INFO: Found 1 stateful pods, waiting for 3
Dec 21 10:52:29.306: INFO: Found 2 stateful pods, waiting for 3
Dec 21 10:52:39.362: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 10:52:39.363: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 10:52:39.363: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 21 10:52:49.194: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 10:52:49.194: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 10:52:49.194: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 10:52:49.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pr88x ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 21 10:52:49.941: INFO: stderr: ""
Dec 21 10:52:49.942: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 21 10:52:49.942: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 21 10:53:00.236: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 21 10:53:10.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pr88x ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 10:53:11.254: INFO: stderr: ""
Dec 21 10:53:11.254: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 21 10:53:11.254: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Dec 21 10:53:21.315: INFO: Waiting for StatefulSet e2e-tests-statefulset-pr88x/ss2 to complete update
Dec 21 10:53:21.315: INFO: Waiting for Pod e2e-tests-statefulset-pr88x/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 10:53:21.315: INFO: Waiting for Pod e2e-tests-statefulset-pr88x/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 10:53:21.315: INFO: Waiting for Pod e2e-tests-statefulset-pr88x/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 10:53:31.353: INFO: Waiting for StatefulSet e2e-tests-statefulset-pr88x/ss2 to complete update
Dec 21 10:53:31.353: INFO: Waiting for Pod e2e-tests-statefulset-pr88x/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 10:53:31.353: INFO: Waiting for Pod e2e-tests-statefulset-pr88x/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 10:53:41.350: INFO: Waiting for StatefulSet e2e-tests-statefulset-pr88x/ss2 to complete update
Dec 21 10:53:41.350: INFO: Waiting for Pod e2e-tests-statefulset-pr88x/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 10:53:41.350: INFO: Waiting for Pod e2e-tests-statefulset-pr88x/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 10:53:51.348: INFO: Waiting for StatefulSet e2e-tests-statefulset-pr88x/ss2 to complete update
Dec 21 10:53:51.348: INFO: Waiting for Pod e2e-tests-statefulset-pr88x/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 10:54:01.349: INFO: Waiting for StatefulSet e2e-tests-statefulset-pr88x/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 21 10:54:11.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pr88x ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 21 10:54:12.327: INFO: stderr: ""
Dec 21 10:54:12.327: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 21 10:54:12.327: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Dec 21 10:54:22.437: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 21 10:54:32.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pr88x ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 10:54:33.144: INFO: stderr: ""
Dec 21 10:54:33.144: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 21 10:54:33.144: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Dec 21 10:54:43.204: INFO: Waiting for StatefulSet e2e-tests-statefulset-pr88x/ss2 to complete update
Dec 21 10:54:43.205: INFO: Waiting for Pod e2e-tests-statefulset-pr88x/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 21 10:54:43.205: INFO: Waiting for Pod e2e-tests-statefulset-pr88x/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 21 10:54:43.205: INFO: Waiting for Pod e2e-tests-statefulset-pr88x/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 21 10:54:53.288: INFO: Waiting for StatefulSet e2e-tests-statefulset-pr88x/ss2 to complete update
Dec 21 10:54:53.288: INFO: Waiting for Pod e2e-tests-statefulset-pr88x/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 21 10:54:53.288: INFO: Waiting for Pod e2e-tests-statefulset-pr88x/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 21 10:55:03.229: INFO: Waiting for StatefulSet e2e-tests-statefulset-pr88x/ss2 to complete update
Dec 21 10:55:03.229: INFO: Waiting for Pod e2e-tests-statefulset-pr88x/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 21 10:55:03.229: INFO: Waiting for Pod e2e-tests-statefulset-pr88x/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 21 10:55:13.233: INFO: Waiting for StatefulSet e2e-tests-statefulset-pr88x/ss2 to complete update
Dec 21 10:55:13.233: INFO: Waiting for Pod e2e-tests-statefulset-pr88x/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 21 10:55:23.231: INFO: Waiting for StatefulSet e2e-tests-statefulset-pr88x/ss2 to complete update
Dec 21 10:55:23.231: INFO: Waiting for Pod e2e-tests-statefulset-pr88x/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 21 10:55:33.233: INFO: Waiting for StatefulSet e2e-tests-statefulset-pr88x/ss2 to complete update
Dec 21 10:55:43.224: INFO: Waiting for StatefulSet e2e-tests-statefulset-pr88x/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 21 10:55:53.228: INFO: Deleting all statefulset in ns e2e-tests-statefulset-pr88x
Dec 21 10:55:53.234: INFO: Scaling statefulset ss2 to 0
Dec 21 10:56:23.318: INFO: Waiting for statefulset status.replicas updated to 0
Dec 21 10:56:23.324: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 10:56:23.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-pr88x" for this suite.
Dec 21 10:56:31.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 10:56:31.627: INFO: namespace: e2e-tests-statefulset-pr88x, resource: bindings, ignored listing per whitelist
Dec 21 10:56:31.771: INFO: namespace e2e-tests-statefulset-pr88x deletion completed in 8.383575s
• [SLOW TEST:262.871 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 10:56:31.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 21 10:56:52.258: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 10:56:52.272: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 10:56:54.274: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 10:56:54.304: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 10:56:56.273: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 10:56:56.290: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 10:56:58.273: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 10:56:58.293: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 10:57:00.273: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 10:57:00.290: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 10:57:02.273: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 10:57:02.285: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 10:57:04.272: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 10:57:04.281: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 10:57:06.273: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 10:57:06.340: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 10:57:08.273: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 10:57:08.289: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 10:57:10.273: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 10:57:10.287: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 10:57:12.273: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 10:57:12.289: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 10:57:14.273: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 10:57:14.283: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 10:57:16.273: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 10:57:16.304: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 10:57:18.273: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 10:57:18.290: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 10:57:20.273: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 10:57:20.352: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 10:57:22.273: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 10:57:22.283: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 21 10:57:24.273: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 21 10:57:24.300: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 10:57:24.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-gkvzf" for this suite.
Dec 21 10:57:48.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 21 10:57:48.578: INFO: namespace: e2e-tests-container-lifecycle-hook-gkvzf, resource: bindings, ignored listing per whitelist Dec 21 10:57:48.697: INFO: namespace e2e-tests-container-lifecycle-hook-gkvzf deletion completed in 24.296665181s • [SLOW TEST:76.925 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 21 10:57:48.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-ba0164ac-23e0-11ea-bbd3-0242ac110005 STEP: Creating secret with name s-test-opt-upd-ba016675-23e0-11ea-bbd3-0242ac110005 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-ba0164ac-23e0-11ea-bbd3-0242ac110005 STEP: Updating secret s-test-opt-upd-ba016675-23e0-11ea-bbd3-0242ac110005 STEP: 
Creating secret with name s-test-opt-create-ba016694-23e0-11ea-bbd3-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 21 10:59:22.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-5krzs" for this suite. Dec 21 10:59:46.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 21 10:59:46.829: INFO: namespace: e2e-tests-secrets-5krzs, resource: bindings, ignored listing per whitelist Dec 21 10:59:46.829: INFO: namespace e2e-tests-secrets-5krzs deletion completed in 24.34069589s • [SLOW TEST:118.132 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 21 10:59:46.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-0067cf7d-23e1-11ea-bbd3-0242ac110005 STEP: Creating a 
pod to test consume configMaps
Dec 21 10:59:47.010: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0068c8ef-23e1-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-5k6c4" to be "success or failure"
Dec 21 10:59:47.021: INFO: Pod "pod-projected-configmaps-0068c8ef-23e1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.366536ms
Dec 21 10:59:49.041: INFO: Pod "pod-projected-configmaps-0068c8ef-23e1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030701582s
Dec 21 10:59:51.050: INFO: Pod "pod-projected-configmaps-0068c8ef-23e1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040047469s
Dec 21 10:59:53.140: INFO: Pod "pod-projected-configmaps-0068c8ef-23e1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129887075s
Dec 21 10:59:55.180: INFO: Pod "pod-projected-configmaps-0068c8ef-23e1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170316338s
Dec 21 10:59:59.610: INFO: Pod "pod-projected-configmaps-0068c8ef-23e1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.599614022s
Dec 21 11:00:01.754: INFO: Pod "pod-projected-configmaps-0068c8ef-23e1-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.743506042s
STEP: Saw pod success
Dec 21 11:00:01.754: INFO: Pod "pod-projected-configmaps-0068c8ef-23e1-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:00:01.781: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-0068c8ef-23e1-11ea-bbd3-0242ac110005 container projected-configmap-volume-test:
STEP: delete the pod
Dec 21 11:00:02.193: INFO: Waiting for pod pod-projected-configmaps-0068c8ef-23e1-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:00:02.214: INFO: Pod pod-projected-configmaps-0068c8ef-23e1-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:00:02.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5k6c4" for this suite.
Dec 21 11:00:08.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:00:08.349: INFO: namespace: e2e-tests-projected-5k6c4, resource: bindings, ignored listing per whitelist
Dec 21 11:00:08.425: INFO: namespace e2e-tests-projected-5k6c4 deletion completed in 6.196564549s
• [SLOW TEST:21.597 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:00:08.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 21 11:00:08.720: INFO: PodSpec: initContainers in spec.initContainers
Dec 21 11:01:17.228: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0d5f9500-23e1-11ea-bbd3-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-j6h2s", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-j6h2s/pods/pod-init-0d5f9500-23e1-11ea-bbd3-0242ac110005", UID:"0d64396d-23e1-11ea-a994-fa163e34d433", ResourceVersion:"15556440", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712522808, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"720384553"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-5tqfb", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil),
Secret:(*v1.SecretVolumeSource)(0xc001944180), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5tqfb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", 
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5tqfb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5tqfb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001e1e6b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c8c060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e1e8d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e1e8f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001e1e8f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001e1e8fc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712522808, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712522808, loc:(*time.Location)(0x7950ac0)}}, 
Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712522808, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712522808, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000e3c040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0019f1730)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0019f17a0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://e4f1764ec3f5d1248e5d17c266a2ae67e2e44e30395dd9e99b8acfba2dbbfc8b"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000e3c080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, 
ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000e3c060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:01:17.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-j6h2s" for this suite.
Dec 21 11:01:41.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:01:41.550: INFO: namespace: e2e-tests-init-container-j6h2s, resource: bindings, ignored listing per whitelist
Dec 21 11:01:41.607: INFO: namespace e2e-tests-init-container-j6h2s deletion completed in 24.280504947s
• [SLOW TEST:93.182 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21
11:01:41.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 21 11:01:41.807: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-tt9zv,SelfLink:/api/v1/namespaces/e2e-tests-watch-tt9zv/configmaps/e2e-watch-test-configmap-a,UID:44dac633-23e1-11ea-a994-fa163e34d433,ResourceVersion:15556491,Generation:0,CreationTimestamp:2019-12-21 11:01:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 21 11:01:41.808: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-tt9zv,SelfLink:/api/v1/namespaces/e2e-tests-watch-tt9zv/configmaps/e2e-watch-test-configmap-a,UID:44dac633-23e1-11ea-a994-fa163e34d433,ResourceVersion:15556491,Generation:0,CreationTimestamp:2019-12-21 11:01:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 21 11:01:51.845: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-tt9zv,SelfLink:/api/v1/namespaces/e2e-tests-watch-tt9zv/configmaps/e2e-watch-test-configmap-a,UID:44dac633-23e1-11ea-a994-fa163e34d433,ResourceVersion:15556504,Generation:0,CreationTimestamp:2019-12-21 11:01:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 21 11:01:51.846: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-tt9zv,SelfLink:/api/v1/namespaces/e2e-tests-watch-tt9zv/configmaps/e2e-watch-test-configmap-a,UID:44dac633-23e1-11ea-a994-fa163e34d433,ResourceVersion:15556504,Generation:0,CreationTimestamp:2019-12-21 11:01:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 21 11:02:01.898: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-tt9zv,SelfLink:/api/v1/namespaces/e2e-tests-watch-tt9zv/configmaps/e2e-watch-test-configmap-a,UID:44dac633-23e1-11ea-a994-fa163e34d433,ResourceVersion:15556517,Generation:0,CreationTimestamp:2019-12-21 11:01:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 21 11:02:01.898: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-tt9zv,SelfLink:/api/v1/namespaces/e2e-tests-watch-tt9zv/configmaps/e2e-watch-test-configmap-a,UID:44dac633-23e1-11ea-a994-fa163e34d433,ResourceVersion:15556517,Generation:0,CreationTimestamp:2019-12-21 11:01:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 21 11:02:11.935: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-tt9zv,SelfLink:/api/v1/namespaces/e2e-tests-watch-tt9zv/configmaps/e2e-watch-test-configmap-a,UID:44dac633-23e1-11ea-a994-fa163e34d433,ResourceVersion:15556530,Generation:0,CreationTimestamp:2019-12-21 11:01:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 21 11:02:11.935: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-tt9zv,SelfLink:/api/v1/namespaces/e2e-tests-watch-tt9zv/configmaps/e2e-watch-test-configmap-a,UID:44dac633-23e1-11ea-a994-fa163e34d433,ResourceVersion:15556530,Generation:0,CreationTimestamp:2019-12-21 11:01:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 21 11:02:21.968: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-tt9zv,SelfLink:/api/v1/namespaces/e2e-tests-watch-tt9zv/configmaps/e2e-watch-test-configmap-b,UID:5cc859f6-23e1-11ea-a994-fa163e34d433,ResourceVersion:15556543,Generation:0,CreationTimestamp:2019-12-21 11:02:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 21 11:02:21.969: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-tt9zv,SelfLink:/api/v1/namespaces/e2e-tests-watch-tt9zv/configmaps/e2e-watch-test-configmap-b,UID:5cc859f6-23e1-11ea-a994-fa163e34d433,ResourceVersion:15556543,Generation:0,CreationTimestamp:2019-12-21 11:02:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 21 11:02:31.996: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-tt9zv,SelfLink:/api/v1/namespaces/e2e-tests-watch-tt9zv/configmaps/e2e-watch-test-configmap-b,UID:5cc859f6-23e1-11ea-a994-fa163e34d433,ResourceVersion:15556556,Generation:0,CreationTimestamp:2019-12-21 11:02:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 21 11:02:31.997: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-tt9zv,SelfLink:/api/v1/namespaces/e2e-tests-watch-tt9zv/configmaps/e2e-watch-test-configmap-b,UID:5cc859f6-23e1-11ea-a994-fa163e34d433,ResourceVersion:15556556,Generation:0,CreationTimestamp:2019-12-21 11:02:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:02:41.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-tt9zv" for this suite.
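[Editor's note] The Watchers test above opens three watches (label A, label B, and A-or-B) and checks that each event is delivered only to watchers whose label selector matches. The routing rule can be sketched as a small pure function; the watcher names and the `route_event` helper below are illustrative stand-ins, not the e2e framework's API.

```python
def matches(accepted_values, labels):
    """True if the object's watch-this-configmap label is one the watcher accepts."""
    return labels.get("watch-this-configmap") in accepted_values

def route_event(labels):
    """Return which hypothetical watchers (A, B, A-or-B) would observe an event
    for an object carrying the given labels, mirroring the test's three watches."""
    watchers = {
        "watch-A": {"multiple-watchers-A"},
        "watch-B": {"multiple-watchers-B"},
        "watch-A-or-B": {"multiple-watchers-A", "multiple-watchers-B"},
    }
    return sorted(name for name, vals in watchers.items() if matches(vals, labels))

# Events for e2e-watch-test-configmap-a go to the A and A-or-B watchers,
# which is why each ADDED/MODIFIED/DELETED line above appears exactly twice.
print(route_event({"watch-this-configmap": "multiple-watchers-A"}))
# -> ['watch-A', 'watch-A-or-B']
```

This also explains the duplicated `Got :` lines in the log: two of the three watchers match configmap A, so every event is logged twice.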
Dec 21 11:02:48.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:02:48.119: INFO: namespace: e2e-tests-watch-tt9zv, resource: bindings, ignored listing per whitelist
Dec 21 11:02:48.303: INFO: namespace e2e-tests-watch-tt9zv deletion completed in 6.289928435s
• [SLOW TEST:66.696 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:02:48.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-5hrs
STEP: Creating a pod to test atomic-volume-subpath
Dec 21 11:02:48.603: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5hrs" in namespace "e2e-tests-subpath-whvhm" to be "success or failure"
Dec 21 11:02:48.647: INFO: Pod
"pod-subpath-test-configmap-5hrs": Phase="Pending", Reason="", readiness=false. Elapsed: 43.554676ms
Dec 21 11:02:50.666: INFO: Pod "pod-subpath-test-configmap-5hrs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06283175s
Dec 21 11:02:52.695: INFO: Pod "pod-subpath-test-configmap-5hrs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091849827s
Dec 21 11:02:55.100: INFO: Pod "pod-subpath-test-configmap-5hrs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.496570351s
Dec 21 11:02:57.114: INFO: Pod "pod-subpath-test-configmap-5hrs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.510729376s
Dec 21 11:02:59.134: INFO: Pod "pod-subpath-test-configmap-5hrs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.530621006s
Dec 21 11:03:01.143: INFO: Pod "pod-subpath-test-configmap-5hrs": Phase="Pending", Reason="", readiness=false. Elapsed: 12.539570119s
Dec 21 11:03:03.177: INFO: Pod "pod-subpath-test-configmap-5hrs": Phase="Pending", Reason="", readiness=false. Elapsed: 14.573578937s
Dec 21 11:03:05.199: INFO: Pod "pod-subpath-test-configmap-5hrs": Phase="Pending", Reason="", readiness=false. Elapsed: 16.596057737s
Dec 21 11:03:07.310: INFO: Pod "pod-subpath-test-configmap-5hrs": Phase="Pending", Reason="", readiness=false. Elapsed: 18.70717155s
Dec 21 11:03:09.374: INFO: Pod "pod-subpath-test-configmap-5hrs": Phase="Running", Reason="", readiness=false. Elapsed: 20.771049747s
Dec 21 11:03:11.467: INFO: Pod "pod-subpath-test-configmap-5hrs": Phase="Running", Reason="", readiness=false. Elapsed: 22.864022791s
Dec 21 11:03:13.481: INFO: Pod "pod-subpath-test-configmap-5hrs": Phase="Running", Reason="", readiness=false. Elapsed: 24.877792582s
Dec 21 11:03:15.537: INFO: Pod "pod-subpath-test-configmap-5hrs": Phase="Running", Reason="", readiness=false. Elapsed: 26.934497221s
Dec 21 11:03:17.552: INFO: Pod "pod-subpath-test-configmap-5hrs": Phase="Running", Reason="", readiness=false. Elapsed: 28.949462642s
Dec 21 11:03:19.559: INFO: Pod "pod-subpath-test-configmap-5hrs": Phase="Running", Reason="", readiness=false. Elapsed: 30.956332803s
Dec 21 11:03:22.257: INFO: Pod "pod-subpath-test-configmap-5hrs": Phase="Running", Reason="", readiness=false. Elapsed: 33.65371656s
Dec 21 11:03:24.668: INFO: Pod "pod-subpath-test-configmap-5hrs": Phase="Running", Reason="", readiness=false. Elapsed: 36.064912963s
Dec 21 11:03:26.683: INFO: Pod "pod-subpath-test-configmap-5hrs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.079832434s
STEP: Saw pod success
Dec 21 11:03:26.683: INFO: Pod "pod-subpath-test-configmap-5hrs" satisfied condition "success or failure"
Dec 21 11:03:26.687: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-5hrs container test-container-subpath-configmap-5hrs:
STEP: delete the pod
Dec 21 11:03:29.901: INFO: Waiting for pod pod-subpath-test-configmap-5hrs to disappear
Dec 21 11:03:29.931: INFO: Pod pod-subpath-test-configmap-5hrs no longer exists
STEP: Deleting pod pod-subpath-test-configmap-5hrs
Dec 21 11:03:29.931: INFO: Deleting pod "pod-subpath-test-configmap-5hrs" in namespace "e2e-tests-subpath-whvhm"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:03:30.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-whvhm" for this suite.
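[Editor's note] The repeated `Phase="Pending" ... Elapsed: ...` lines throughout this log come from the framework polling a pod's status until it reaches a terminal phase or a timeout expires. A minimal sketch of that wait loop follows; `wait_for_phase` and its fake status source are illustrative, not the actual e2e framework code.

```python
import itertools

def wait_for_phase(get_phase, timeout_s=300, interval_s=2):
    """Poll get_phase() until a terminal pod phase is seen or timeout_s elapses.

    Mirrors the e2e pattern of logging Pending/Running repeatedly until
    Succeeded or Failed. get_phase is a stand-in for a real status read;
    a real loop would time.sleep(interval_s) each iteration (omitted here
    so the sketch runs instantly).
    """
    for elapsed in itertools.count(0, interval_s):
        phase = get_phase()
        print(f"Phase={phase!r}, Elapsed: {elapsed}s")
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod still {phase} after {timeout_s}s")

# Fake status source: Pending twice, Running twice, then Succeeded,
# like the pod-subpath-test-configmap-5hrs progression above.
phases = iter(["Pending", "Pending", "Running", "Running", "Succeeded"])
result = wait_for_phase(lambda: next(phases))
print(result)  # Succeeded
```

The "Waiting up to 5m0s" in the log corresponds to the `timeout_s` bound; the test fails with a timeout error if the pod never leaves Pending or Running.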
Dec 21 11:03:36.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:03:36.445: INFO: namespace: e2e-tests-subpath-whvhm, resource: bindings, ignored listing per whitelist
Dec 21 11:03:36.474: INFO: namespace e2e-tests-subpath-whvhm deletion completed in 6.377340309s
• [SLOW TEST:48.170 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:03:36.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 21 11:03:36.937: INFO: Waiting up to 5m0s for pod
"downwardapi-volume-897993e2-23e1-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-jsccc" to be "success or failure"
Dec 21 11:03:36.958: INFO: Pod "downwardapi-volume-897993e2-23e1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.762901ms
Dec 21 11:03:39.419: INFO: Pod "downwardapi-volume-897993e2-23e1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.482049746s
Dec 21 11:03:41.437: INFO: Pod "downwardapi-volume-897993e2-23e1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.500198539s
Dec 21 11:03:43.520: INFO: Pod "downwardapi-volume-897993e2-23e1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.58299321s
Dec 21 11:03:46.778: INFO: Pod "downwardapi-volume-897993e2-23e1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.840791751s
Dec 21 11:03:48.787: INFO: Pod "downwardapi-volume-897993e2-23e1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.850276718s
Dec 21 11:03:52.631: INFO: Pod "downwardapi-volume-897993e2-23e1-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.693744905s
STEP: Saw pod success
Dec 21 11:03:52.631: INFO: Pod "downwardapi-volume-897993e2-23e1-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:03:52.712: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-897993e2-23e1-11ea-bbd3-0242ac110005 container client-container:
STEP: delete the pod
Dec 21 11:03:55.483: INFO: Waiting for pod downwardapi-volume-897993e2-23e1-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:03:55.794: INFO: Pod downwardapi-volume-897993e2-23e1-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:03:55.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jsccc" for this suite.
Dec 21 11:04:04.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:04:04.968: INFO: namespace: e2e-tests-projected-jsccc, resource: bindings, ignored listing per whitelist
Dec 21 11:04:05.064: INFO: namespace e2e-tests-projected-jsccc deletion completed in 9.239227768s
• [SLOW TEST:28.590 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 21 11:04:05.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Dec 21 11:04:05.272: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Dec 21 11:04:05.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-smzpf' Dec 21 11:04:09.006: INFO: stderr: "" Dec 21 11:04:09.006: INFO: stdout: "service/redis-slave created\n" Dec 21 11:04:09.006: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Dec 21 11:04:09.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-smzpf' Dec 21 11:04:10.096: INFO: stderr: "" Dec 21 11:04:10.096: INFO: stdout: "service/redis-master created\n" Dec 21 11:04:10.096: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Dec 21 11:04:10.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-smzpf' Dec 21 11:04:10.617: INFO: stderr: "" Dec 21 11:04:10.617: INFO: stdout: "service/frontend created\n" Dec 21 11:04:10.618: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Dec 21 11:04:10.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-smzpf' Dec 21 11:04:11.162: INFO: stderr: "" Dec 21 11:04:11.162: INFO: stdout: "deployment.extensions/frontend created\n" Dec 21 11:04:11.163: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Dec 21 11:04:11.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-smzpf' Dec 21 11:04:11.705: INFO: stderr: "" Dec 21 11:04:11.705: INFO: stdout: "deployment.extensions/redis-master created\n" Dec 21 11:04:11.705: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: 
gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Dec 21 11:04:11.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-smzpf' Dec 21 11:04:12.285: INFO: stderr: "" Dec 21 11:04:12.285: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Dec 21 11:04:12.285: INFO: Waiting for all frontend pods to be Running. Dec 21 11:04:47.338: INFO: Waiting for frontend to serve content. Dec 21 11:04:48.177: INFO: Trying to add a new entry to the guestbook. Dec 21 11:04:48.204: INFO: Verifying that added entry can be retrieved. Dec 21 11:04:49.257: INFO: Failed to get response from guestbook. err: , response: {"data": ""} STEP: using delete to clean up resources Dec 21 11:04:54.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-smzpf' Dec 21 11:04:54.755: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 21 11:04:54.755: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Dec 21 11:04:54.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-smzpf' Dec 21 11:04:55.054: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 21 11:04:55.054: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Dec 21 11:04:55.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-smzpf' Dec 21 11:04:55.252: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 21 11:04:55.252: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Dec 21 11:04:55.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-smzpf' Dec 21 11:04:55.427: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 21 11:04:55.427: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Dec 21 11:04:55.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-smzpf' Dec 21 11:04:55.740: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 21 11:04:55.740: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Dec 21 11:04:55.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-smzpf' Dec 21 11:04:56.028: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 21 11:04:56.028: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 21 11:04:56.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-smzpf" for this suite. Dec 21 11:05:38.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 21 11:05:38.387: INFO: namespace: e2e-tests-kubectl-smzpf, resource: bindings, ignored listing per whitelist Dec 21 11:05:38.395: INFO: namespace e2e-tests-kubectl-smzpf deletion completed in 42.303604046s • [SLOW TEST:93.331 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 21 11:05:38.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Dec 21 11:05:38.596: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 21 11:06:01.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-79w9l" for this suite. Dec 21 11:06:09.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 21 11:06:09.537: INFO: namespace: e2e-tests-init-container-79w9l, resource: bindings, ignored listing per whitelist Dec 21 11:06:09.587: INFO: namespace e2e-tests-init-container-79w9l deletion completed in 8.411748464s • [SLOW TEST:31.191 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 21 11:06:09.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow 
opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Dec 21 11:06:10.576: INFO: created pod pod-service-account-defaultsa Dec 21 11:06:10.576: INFO: pod pod-service-account-defaultsa service account token volume mount: true Dec 21 11:06:10.598: INFO: created pod pod-service-account-mountsa Dec 21 11:06:10.598: INFO: pod pod-service-account-mountsa service account token volume mount: true Dec 21 11:06:10.728: INFO: created pod pod-service-account-nomountsa Dec 21 11:06:10.728: INFO: pod pod-service-account-nomountsa service account token volume mount: false Dec 21 11:06:10.784: INFO: created pod pod-service-account-defaultsa-mountspec Dec 21 11:06:10.784: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Dec 21 11:06:10.835: INFO: created pod pod-service-account-mountsa-mountspec Dec 21 11:06:10.835: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Dec 21 11:06:10.966: INFO: created pod pod-service-account-nomountsa-mountspec Dec 21 11:06:10.966: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Dec 21 11:06:11.033: INFO: created pod pod-service-account-defaultsa-nomountspec Dec 21 11:06:11.033: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Dec 21 11:06:11.952: INFO: created pod pod-service-account-mountsa-nomountspec Dec 21 11:06:11.953: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Dec 21 11:06:12.458: INFO: created pod pod-service-account-nomountsa-nomountspec Dec 21 11:06:12.458: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 21 
11:06:12.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-x6zvx" for this suite. Dec 21 11:06:47.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 21 11:06:47.253: INFO: namespace: e2e-tests-svcaccounts-x6zvx, resource: bindings, ignored listing per whitelist Dec 21 11:06:47.344: INFO: namespace e2e-tests-svcaccounts-x6zvx deletion completed in 32.949728247s • [SLOW TEST:37.757 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 21 11:06:47.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Dec 21 11:06:48.297: INFO: Waiting up to 5m0s for pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-zxr68" in namespace "e2e-tests-svcaccounts-mj6bp" to be "success or failure" Dec 21 11:06:48.376: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-zxr68": Phase="Pending", Reason="", readiness=false. 
Elapsed: 79.352421ms Dec 21 11:06:50.609: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-zxr68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312525939s Dec 21 11:06:52.659: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-zxr68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.362468663s Dec 21 11:06:55.384: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-zxr68": Phase="Pending", Reason="", readiness=false. Elapsed: 7.086829959s Dec 21 11:06:57.400: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-zxr68": Phase="Pending", Reason="", readiness=false. Elapsed: 9.103286172s Dec 21 11:07:00.715: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-zxr68": Phase="Pending", Reason="", readiness=false. Elapsed: 12.41868428s Dec 21 11:07:03.721: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-zxr68": Phase="Pending", Reason="", readiness=false. Elapsed: 15.423832525s Dec 21 11:07:05.742: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-zxr68": Phase="Pending", Reason="", readiness=false. Elapsed: 17.445090301s Dec 21 11:07:07.771: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-zxr68": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 19.473923305s STEP: Saw pod success Dec 21 11:07:07.771: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-zxr68" satisfied condition "success or failure" Dec 21 11:07:07.782: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-zxr68 container token-test: STEP: delete the pod Dec 21 11:07:07.934: INFO: Waiting for pod pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-zxr68 to disappear Dec 21 11:07:07.982: INFO: Pod pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-zxr68 no longer exists STEP: Creating a pod to test consume service account root CA Dec 21 11:07:08.006: INFO: Waiting up to 5m0s for pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-dx8qz" in namespace "e2e-tests-svcaccounts-mj6bp" to be "success or failure" Dec 21 11:07:08.208: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-dx8qz": Phase="Pending", Reason="", readiness=false. Elapsed: 202.294838ms Dec 21 11:07:12.349: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-dx8qz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.343169242s Dec 21 11:07:14.430: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-dx8qz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.424561587s Dec 21 11:07:16.449: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-dx8qz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.443725603s Dec 21 11:07:18.472: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-dx8qz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.466148437s Dec 21 11:07:21.159: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-dx8qz": Phase="Pending", Reason="", readiness=false. Elapsed: 13.153164029s Dec 21 11:07:23.170: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-dx8qz": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.164195544s Dec 21 11:07:25.317: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-dx8qz": Phase="Pending", Reason="", readiness=false. Elapsed: 17.311820814s Dec 21 11:07:27.328: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-dx8qz": Phase="Pending", Reason="", readiness=false. Elapsed: 19.322082709s Dec 21 11:07:29.342: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-dx8qz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.336643535s STEP: Saw pod success Dec 21 11:07:29.342: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-dx8qz" satisfied condition "success or failure" Dec 21 11:07:29.346: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-dx8qz container root-ca-test: STEP: delete the pod Dec 21 11:07:30.235: INFO: Waiting for pod pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-dx8qz to disappear Dec 21 11:07:30.279: INFO: Pod pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-dx8qz no longer exists STEP: Creating a pod to test consume service account namespace Dec 21 11:07:30.296: INFO: Waiting up to 5m0s for pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-2kfj7" in namespace "e2e-tests-svcaccounts-mj6bp" to be "success or failure" Dec 21 11:07:30.456: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-2kfj7": Phase="Pending", Reason="", readiness=false. Elapsed: 160.258403ms Dec 21 11:07:33.602: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-2kfj7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.306308716s Dec 21 11:07:35.624: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-2kfj7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.327645218s Dec 21 11:07:38.757: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-2kfj7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.461444317s Dec 21 11:07:40.783: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-2kfj7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.486934968s Dec 21 11:07:42.793: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-2kfj7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.497229192s Dec 21 11:07:45.920: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-2kfj7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.623781186s Dec 21 11:07:48.084: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-2kfj7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.787910086s Dec 21 11:07:52.151: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-2kfj7": Phase="Pending", Reason="", readiness=false. Elapsed: 21.854715196s Dec 21 11:07:54.443: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-2kfj7": Phase="Pending", Reason="", readiness=false. Elapsed: 24.146687275s Dec 21 11:07:56.479: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-2kfj7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.183330437s Dec 21 11:07:58.568: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-2kfj7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.27247094s STEP: Saw pod success Dec 21 11:07:58.569: INFO: Pod "pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-2kfj7" satisfied condition "success or failure" Dec 21 11:07:58.616: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-2kfj7 container namespace-test: STEP: delete the pod Dec 21 11:07:58.697: INFO: Waiting for pod pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-2kfj7 to disappear Dec 21 11:07:58.732: INFO: Pod pod-service-account-fb8494c3-23e1-11ea-bbd3-0242ac110005-2kfj7 no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 21 11:07:58.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-mj6bp" for this suite. Dec 21 11:08:06.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 21 11:08:07.021: INFO: namespace: e2e-tests-svcaccounts-mj6bp, resource: bindings, ignored listing per whitelist Dec 21 11:08:07.036: INFO: namespace e2e-tests-svcaccounts-mj6bp deletion completed in 8.282628757s • [SLOW TEST:79.691 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Dec 21 11:08:07.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-6db2l [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-6db2l STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-6db2l Dec 21 11:08:07.366: INFO: Found 0 stateful pods, waiting for 1 Dec 21 11:08:17.387: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Dec 21 11:08:17.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 21 11:08:18.056: INFO: stderr: "" Dec 21 11:08:18.056: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 21 11:08:18.056: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 21 11:08:18.064: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true 
Dec 21 11:08:28.099: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 21 11:08:28.099: INFO: Waiting for statefulset status.replicas updated to 0 Dec 21 11:08:28.140: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999803s Dec 21 11:08:29.244: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.980683722s Dec 21 11:08:30.258: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.877474314s Dec 21 11:08:31.269: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.862550987s Dec 21 11:08:32.282: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.851675317s Dec 21 11:08:35.215: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.838750926s Dec 21 11:08:36.241: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.905393867s Dec 21 11:08:37.258: INFO: Verifying statefulset ss doesn't scale past 1 for another 880.043385ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-6db2l Dec 21 11:08:38.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:08:38.958: INFO: stderr: "" Dec 21 11:08:38.958: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 21 11:08:38.958: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 21 11:08:38.990: INFO: Found 1 stateful pods, waiting for 3 Dec 21 11:08:49.248: INFO: Found 2 stateful pods, waiting for 3 Dec 21 11:09:00.153: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 21 11:09:00.154: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 21 11:09:00.154: INFO: Waiting for pod 
ss-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 21 11:09:09.008: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 21 11:09:09.008: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 21 11:09:09.008: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Dec 21 11:09:09.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 21 11:09:09.731: INFO: stderr: "" Dec 21 11:09:09.731: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 21 11:09:09.731: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 21 11:09:09.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 21 11:09:10.960: INFO: stderr: "" Dec 21 11:09:10.960: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 21 11:09:10.960: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 21 11:09:10.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 21 11:09:11.612: INFO: stderr: "" Dec 21 11:09:11.612: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 21 11:09:11.612: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> 
'/tmp/index.html' Dec 21 11:09:11.612: INFO: Waiting for statefulset status.replicas updated to 0 Dec 21 11:09:11.627: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Dec 21 11:09:22.133: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 21 11:09:22.133: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 21 11:09:22.133: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 21 11:09:22.389: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999584s Dec 21 11:09:23.415: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.975023413s Dec 21 11:09:24.429: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.949334694s Dec 21 11:09:25.445: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.93561261s Dec 21 11:09:26.487: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.919446604s Dec 21 11:09:27.500: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.877042452s Dec 21 11:09:28.589: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.864201876s Dec 21 11:09:29.599: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.774824106s Dec 21 11:09:30.660: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.765172822s Dec 21 11:09:31.676: INFO: Verifying statefulset ss doesn't scale past 3 for another 704.522891ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-6db2l Dec 21 11:09:32.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:09:33.346: INFO: stderr: "" Dec 21 11:09:33.346: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 21 
11:09:33.346: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 21 11:09:33.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:09:34.117: INFO: stderr: "" Dec 21 11:09:34.117: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 21 11:09:34.117: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 21 11:09:34.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:09:35.416: INFO: rc: 126 Dec 21 11:09:35.416: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "process_linux.go:91: executing setns process caused \"exit status 21\"": unknown command terminated with exit code 126 [] 0xc000107e00 exit status 126 true [0xc0017742c8 0xc0017742e0 0xc0017742f8] [0xc0017742c8 0xc0017742e0 0xc0017742f8] [0xc0017742d8 0xc0017742f0] [0x935700 0x935700] 0xc000cbc600 }: Command stdout: OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "process_linux.go:91: executing setns process caused \"exit status 21\"": unknown stderr: command terminated with exit code 126 error: exit status 126 Dec 21 11:09:45.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' Dec 21 11:09:45.598: INFO: rc: 1 Dec 21 11:09:45.599: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc000107f50 exit status 1 true [0xc001774300 0xc001774318 0xc001774330] [0xc001774300 0xc001774318 0xc001774330] [0xc001774310 0xc001774328] [0x935700 0x935700] 0xc00171eba0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Dec 21 11:09:55.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:09:55.911: INFO: rc: 1 Dec 21 11:09:55.912: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000db0ea0 exit status 1 true [0xc000a380e8 0xc000a38110 0xc000a38188] [0xc000a380e8 0xc000a38110 0xc000a38188] [0xc000a38108 0xc000a38170] [0x935700 0x935700] 0xc001501440 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:10:05.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:10:06.635: INFO: rc: 1 Dec 21 11:10:06.636: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000db1020 exit status 1 true [0xc000a381a0 0xc000a38220 0xc000a38268] [0xc000a381a0 0xc000a38220 0xc000a38268] [0xc000a381e8 0xc000a38260] [0x935700 0x935700] 0xc001501d40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:10:16.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:10:16.770: INFO: rc: 1 Dec 21 11:10:16.771: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0002ecc00 exit status 1 true [0xc001774338 0xc001774350 0xc001774368] [0xc001774338 0xc001774350 0xc001774368] [0xc001774348 0xc001774360] [0x935700 0x935700] 0xc00171f0e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:10:26.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:10:26.871: INFO: rc: 1 Dec 21 11:10:26.872: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0002ecd20 exit status 1 true [0xc001774370 0xc001774388 0xc0017743a0] [0xc001774370 0xc001774388 0xc0017743a0] [0xc001774380 0xc001774398] [0x935700 0x935700] 0xc00171f740 }: Command stdout: stderr: Error from 
server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:10:36.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:10:36.991: INFO: rc: 1 Dec 21 11:10:36.992: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000db1170 exit status 1 true [0xc000a38270 0xc000a382d8 0xc000a38358] [0xc000a38270 0xc000a382d8 0xc000a38358] [0xc000a382d0 0xc000a38328] [0x935700 0x935700] 0xc001af8ba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:10:46.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:10:47.154: INFO: rc: 1 Dec 21 11:10:47.154: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0002ecea0 exit status 1 true [0xc0017743a8 0xc0017743c0 0xc0017743d8] [0xc0017743a8 0xc0017743c0 0xc0017743d8] [0xc0017743b8 0xc0017743d0] [0x935700 0x935700] 0xc0016ae060 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:10:57.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:10:57.306: INFO: rc: 1 Dec 21 
11:10:57.306: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000d51f50 exit status 1 true [0xc001b96148 0xc001b96170 0xc001b961a8] [0xc001b96148 0xc001b96170 0xc001b961a8] [0xc001b96168 0xc001b961a0] [0x935700 0x935700] 0xc0019b2300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:11:07.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:11:07.439: INFO: rc: 1 Dec 21 11:11:07.439: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0001070e0 exit status 1 true [0xc000a38038 0xc000a38078 0xc000a38100] [0xc000a38038 0xc000a38078 0xc000a38100] [0xc000a38070 0xc000a380e8] [0x935700 0x935700] 0xc00171e720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:11:17.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:11:17.559: INFO: rc: 1 Dec 21 11:11:17.559: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 
0xc000e42120 exit status 1 true [0xc000da4000 0xc000da4018 0xc000da4030] [0xc000da4000 0xc000da4018 0xc000da4030] [0xc000da4010 0xc000da4028] [0x935700 0x935700] 0xc00119b680 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:11:27.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:11:27.680: INFO: rc: 1 Dec 21 11:11:27.680: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000e422d0 exit status 1 true [0xc000da4038 0xc000da4050 0xc000da4068] [0xc000da4038 0xc000da4050 0xc000da4068] [0xc000da4048 0xc000da4060] [0x935700 0x935700] 0xc000cbc000 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:11:37.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:11:37.798: INFO: rc: 1 Dec 21 11:11:37.798: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000e423f0 exit status 1 true [0xc000da4070 0xc000da4088 0xc000da40a0] [0xc000da4070 0xc000da4088 0xc000da40a0] [0xc000da4080 0xc000da4098] [0x935700 0x935700] 0xc000cbc300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:11:47.799: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:11:47.930: INFO: rc: 1 Dec 21 11:11:47.930: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000107230 exit status 1 true [0xc000a38108 0xc000a38170 0xc000a381b0] [0xc000a38108 0xc000a38170 0xc000a381b0] [0xc000a38128 0xc000a381a0] [0x935700 0x935700] 0xc00171ef60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:11:57.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:11:58.054: INFO: rc: 1 Dec 21 11:11:58.054: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000107380 exit status 1 true [0xc000a381e8 0xc000a38260 0xc000a382b0] [0xc000a381e8 0xc000a38260 0xc000a382b0] [0xc000a38248 0xc000a38270] [0x935700 0x935700] 0xc00171f560 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:12:08.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:12:08.175: INFO: rc: 1 Dec 21 11:12:08.175: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl 
[kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0001074a0 exit status 1 true [0xc000a382d0 0xc000a38328 0xc000a38368] [0xc000a382d0 0xc000a38328 0xc000a38368] [0xc000a382f0 0xc000a38360] [0x935700 0x935700] 0xc00171fc20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:12:18.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:12:18.280: INFO: rc: 1 Dec 21 11:12:18.280: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000db0120 exit status 1 true [0xc001b96018 0xc001b96040 0xc001b96080] [0xc001b96018 0xc001b96040 0xc001b96080] [0xc001b96038 0xc001b96068] [0x935700 0x935700] 0xc001500240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:12:28.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:12:28.384: INFO: rc: 1 Dec 21 11:12:28.385: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000e425d0 exit status 1 true [0xc000da40a8 0xc000da40c0 0xc000da40d8] [0xc000da40a8 0xc000da40c0 
0xc000da40d8] [0xc000da40b8 0xc000da40d0] [0x935700 0x935700] 0xc000cbc600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:12:38.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:12:38.544: INFO: rc: 1 Dec 21 11:12:38.545: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000e42720 exit status 1 true [0xc000da40e0 0xc000da40f8 0xc000da4110] [0xc000da40e0 0xc000da40f8 0xc000da4110] [0xc000da40f0 0xc000da4108] [0x935700 0x935700] 0xc0015b2fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:12:48.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:12:48.685: INFO: rc: 1 Dec 21 11:12:48.686: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000e42840 exit status 1 true [0xc000da4118 0xc000da4130 0xc000da4148] [0xc000da4118 0xc000da4130 0xc000da4148] [0xc000da4128 0xc000da4140] [0x935700 0x935700] 0xc0015b37a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:12:58.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- 
/bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:12:58.793: INFO: rc: 1 Dec 21 11:12:58.793: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000bdc2a0 exit status 1 true [0xc001774000 0xc001774018 0xc001774030] [0xc001774000 0xc001774018 0xc001774030] [0xc001774010 0xc001774028] [0x935700 0x935700] 0xc000970780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:13:08.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:13:08.929: INFO: rc: 1 Dec 21 11:13:08.929: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000db0150 exit status 1 true [0xc001b96018 0xc001b96040 0xc001b96080] [0xc001b96018 0xc001b96040 0xc001b96080] [0xc001b96038 0xc001b96068] [0x935700 0x935700] 0xc000cbc240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:13:18.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:13:19.214: INFO: rc: 1 Dec 21 11:13:19.214: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000bdc210 exit status 1 true [0xc000a38038 0xc000a38078 0xc000a38100] [0xc000a38038 0xc000a38078 0xc000a38100] [0xc000a38070 0xc000a380e8] [0x935700 0x935700] 0xc00119a720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:13:29.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:13:30.838: INFO: rc: 1 Dec 21 11:13:30.839: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000bdc360 exit status 1 true [0xc000a38108 0xc000a38170 0xc000a381b0] [0xc000a38108 0xc000a38170 0xc000a381b0] [0xc000a38128 0xc000a381a0] [0x935700 0x935700] 0xc00127c660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:13:40.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:13:42.442: INFO: rc: 1 Dec 21 11:13:42.442: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000107110 exit status 1 true [0xc001774000 0xc001774018 0xc001774030] [0xc001774000 0xc001774018 0xc001774030] [0xc001774010 0xc001774028] [0x935700 0x935700] 0xc0015001e0 }: Command stdout: stderr: Error from 
server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:13:52.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:13:52.619: INFO: rc: 1 Dec 21 11:13:52.619: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000bdc4b0 exit status 1 true [0xc000a381e8 0xc000a38260 0xc000a382b0] [0xc000a381e8 0xc000a38260 0xc000a382b0] [0xc000a38248 0xc000a38270] [0x935700 0x935700] 0xc000970660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:14:02.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:14:02.759: INFO: rc: 1 Dec 21 11:14:02.760: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000e421e0 exit status 1 true [0xc000da4000 0xc000da4018 0xc000da4030] [0xc000da4000 0xc000da4018 0xc000da4030] [0xc000da4010 0xc000da4028] [0x935700 0x935700] 0xc00171ec60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:14:12.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:14:12.931: INFO: rc: 1 Dec 21 
11:14:12.932: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000bdc5d0 exit status 1 true [0xc000a382d0 0xc000a38328 0xc000a38368] [0xc000a382d0 0xc000a38328 0xc000a38368] [0xc000a382f0 0xc000a38360] [0x935700 0x935700] 0xc000970f60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:14:22.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:14:23.043: INFO: rc: 1 Dec 21 11:14:23.043: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0001072f0 exit status 1 true [0xc001774038 0xc001774050 0xc001774068] [0xc001774038 0xc001774050 0xc001774068] [0xc001774048 0xc001774060] [0x935700 0x935700] 0xc0015005a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:14:33.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:14:33.174: INFO: rc: 1 Dec 21 11:14:33.174: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 
0xc000bdc6f0 exit status 1 true [0xc000a38370 0xc000a383c0 0xc000a383f0] [0xc000a38370 0xc000a383c0 0xc000a383f0] [0xc000a38390 0xc000a383e8] [0x935700 0x935700] 0xc000971680 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Dec 21 11:14:43.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6db2l ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 21 11:14:43.289: INFO: rc: 1 Dec 21 11:14:43.289: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: Dec 21 11:14:43.289: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Dec 21 11:14:43.320: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6db2l Dec 21 11:14:43.331: INFO: Scaling statefulset ss to 0 Dec 21 11:14:43.367: INFO: Waiting for statefulset status.replicas updated to 0 Dec 21 11:14:43.369: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Dec 21 11:14:43.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-6db2l" for this suite. 
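The suite knocks each pod out of Ready by moving nginx's index.html aside inside the pod, then restores it during scale-down; the trailing `|| true` lets the exec report success even when there is nothing left to move (or the pod is already gone). A minimal local sketch of that idempotence, using temp directories as stand-ins for the pod's /usr/share/nginx/html and /tmp:

```shell
# Stand-in directories for the pod's web root and the /tmp stash.
html=$(mktemp -d)
stash=$(mktemp -d)
echo 'hello' > "$html/index.html"

# First move succeeds; in the pod this breaks the readiness probe target.
mv -v "$html/index.html" "$stash/" || true

# Second move finds nothing to move, but '|| true' still yields exit status 0,
# which is why the suite can issue the same command against every pod blindly.
mv -v "$html/index.html" "$stash/" || true
echo "exit status: $?"
```

The log above shows the limit of this trick: once ss-2 is deleted by the scale-down, `kubectl exec` itself fails before `/bin/sh` ever runs, so the `|| true` inside the remote command cannot mask the NotFound error, and the harness retries until it gives up.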
Dec 21 11:14:51.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:14:51.548: INFO: namespace: e2e-tests-statefulset-6db2l, resource: bindings, ignored listing per whitelist
Dec 21 11:14:51.558: INFO: namespace e2e-tests-statefulset-6db2l deletion completed in 8.16190647s

• [SLOW TEST:404.522 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:14:51.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 21 11:14:52.465: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.96843ms)
Dec 21 11:14:52.488: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 23.450272ms)
Dec 21 11:14:52.501: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.771424ms)
Dec 21 11:14:52.522: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 20.021474ms)
Dec 21 11:14:52.582: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 60.726392ms)
Dec 21 11:14:52.604: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 21.819582ms)
Dec 21 11:14:52.615: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.289003ms)
Dec 21 11:14:52.623: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.864157ms)
Dec 21 11:14:52.627: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.879779ms)
Dec 21 11:14:52.633: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.094513ms)
Dec 21 11:14:52.638: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.69198ms)
Dec 21 11:14:52.642: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.884361ms)
Dec 21 11:14:52.646: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.760773ms)
Dec 21 11:14:52.651: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.671677ms)
Dec 21 11:14:52.656: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.387871ms)
Dec 21 11:14:52.661: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.589803ms)
Dec 21 11:14:52.669: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.547079ms)
Dec 21 11:14:53.210: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 540.576084ms)
Dec 21 11:14:53.219: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.05434ms)
Dec 21 11:14:53.225: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.777453ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:14:53.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-b86b7" for this suite.
Dec 21 11:14:59.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:14:59.379: INFO: namespace: e2e-tests-proxy-b86b7, resource: bindings, ignored listing per whitelist
Dec 21 11:14:59.398: INFO: namespace e2e-tests-proxy-b86b7 deletion completed in 6.165838732s

• [SLOW TEST:7.840 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
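The two URL shapes exercised by this test and the next differ only in whether the kubelet port is spelled out. A sketch of issuing the same requests by hand through the API server (node name and port are copied from the log above; adjust for your cluster):

```shell
# Query a node's kubelet log directory via the API server's proxy subresource.
# Both forms should return the same directory listing (alternatives.log, ...).
NODE=hunter-server-hu5at5svl7ps

# Explicit kubelet port, as in the test above:
kubectl get --raw "/api/v1/nodes/${NODE}:10250/proxy/logs/"

# Default port, as in the following test:
kubectl get --raw "/api/v1/nodes/${NODE}/proxy/logs/"
```

`kubectl get --raw` sends the request with the kubeconfig's credentials, so it exercises the same apiserver-to-kubelet proxy path whose latencies the test records.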
SSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:14:59.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 21 11:14:59.625: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 9.770812ms)
Dec 21 11:14:59.631: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 6.5415ms)
Dec 21 11:14:59.638: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 6.4949ms)
Dec 21 11:14:59.645: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 7.244174ms)
Dec 21 11:14:59.651: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.535512ms)
Dec 21 11:14:59.656: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.38486ms)
Dec 21 11:14:59.661: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.592275ms)
Dec 21 11:14:59.668: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 7.149971ms)
Dec 21 11:14:59.715: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 47.193576ms)
Dec 21 11:14:59.722: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 6.510129ms)
Dec 21 11:14:59.728: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 6.304582ms)
Dec 21 11:14:59.735: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 6.333438ms)
Dec 21 11:14:59.741: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 6.869042ms)
Dec 21 11:14:59.749: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 7.22884ms)
Dec 21 11:14:59.754: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.328057ms)
Dec 21 11:14:59.760: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.428913ms)
Dec 21 11:14:59.765: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.400171ms)
Dec 21 11:14:59.771: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.792332ms)
Dec 21 11:14:59.775: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.481261ms)
Dec 21 11:14:59.780: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.108086ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:14:59.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-2p9pr" for this suite.
Dec 21 11:15:05.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:15:06.068: INFO: namespace: e2e-tests-proxy-2p9pr, resource: bindings, ignored listing per whitelist
Dec 21 11:15:06.108: INFO: namespace e2e-tests-proxy-2p9pr deletion completed in 6.323652295s

• [SLOW TEST:6.710 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:15:06.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-92z9h
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 21 11:15:06.707: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 21 11:15:44.946: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-92z9h PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 11:15:44.946: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 11:15:46.226: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:15:46.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-92z9h" for this suite.
Dec 21 11:16:10.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:16:10.614: INFO: namespace: e2e-tests-pod-network-test-92z9h, resource: bindings, ignored listing per whitelist
Dec 21 11:16:10.798: INFO: namespace e2e-tests-pod-network-test-92z9h deletion completed in 24.560463733s

• [SLOW TEST:64.691 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
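The "Found all expected endpoints" line above is the result of the ExecWithOptions probe. Spelled out, the command the framework runs inside host-test-container-pod is a plain UDP hostname query (pod IP and port are copied from this run's log and are specific to it):

```shell
# Ask the netserver pod for its hostname over UDP; any non-empty reply means
# node-to-pod UDP connectivity works. 10.32.0.4:8081 is this run's pod IP/port.
echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$' \
  && echo OK
```

This only succeeds from a node or host-network pod in the cluster, which is why the test execs it inside a hostexec container rather than running it on the test driver.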
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:16:10.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Dec 21 11:16:11.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 21 11:16:13.422: INFO: stderr: ""
Dec 21 11:16:13.422: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:16:13.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sctk6" for this suite.
Dec 21 11:16:19.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:16:19.671: INFO: namespace: e2e-tests-kubectl-sctk6, resource: bindings, ignored listing per whitelist
Dec 21 11:16:19.741: INFO: namespace e2e-tests-kubectl-sctk6 deletion completed in 6.222361637s

• [SLOW TEST:8.942 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
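The stdout captured above is wrapped in ANSI color escapes (`\x1b[0;32m`, `\x1b[0m`, ...), which makes it awkward to grep in scripts. A small, self-contained way to strip them, using a sample line taken from this log (no cluster required):

```shell
# Strip ANSI color codes from kubectl cluster-info style output.
esc=$(printf '\033')                     # literal ESC character
sample="${esc}[0;32mKubernetes master${esc}[0m is running at ${esc}[0;33mhttps://172.24.4.212:6443${esc}[0m"
plain=$(printf '%s\n' "$sample" | sed "s/${esc}\[[0-9;]*m//g")
printf '%s\n' "$plain"   # Kubernetes master is running at https://172.24.4.212:6443
```

The same filter is useful when asserting on `kubectl cluster-info` output in your own scripts, since the command colorizes output even when piped in some versions.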
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:16:19.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 21 11:16:19.928: INFO: Waiting up to 5m0s for pod "downwardapi-volume-504071ab-23e3-11ea-bbd3-0242ac110005" in namespace "e2e-tests-downward-api-pr2rf" to be "success or failure"
Dec 21 11:16:19.940: INFO: Pod "downwardapi-volume-504071ab-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.989457ms
Dec 21 11:16:22.064: INFO: Pod "downwardapi-volume-504071ab-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135778963s
Dec 21 11:16:24.088: INFO: Pod "downwardapi-volume-504071ab-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159678783s
Dec 21 11:16:26.097: INFO: Pod "downwardapi-volume-504071ab-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168439054s
Dec 21 11:16:28.131: INFO: Pod "downwardapi-volume-504071ab-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.20275445s
Dec 21 11:16:30.323: INFO: Pod "downwardapi-volume-504071ab-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.394366796s
Dec 21 11:16:32.343: INFO: Pod "downwardapi-volume-504071ab-23e3-11ea-bbd3-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 12.414015679s
Dec 21 11:16:34.610: INFO: Pod "downwardapi-volume-504071ab-23e3-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.681483057s
STEP: Saw pod success
Dec 21 11:16:34.610: INFO: Pod "downwardapi-volume-504071ab-23e3-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:16:34.793: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-504071ab-23e3-11ea-bbd3-0242ac110005 container client-container: 
STEP: delete the pod
Dec 21 11:16:34.896: INFO: Waiting for pod downwardapi-volume-504071ab-23e3-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:16:34.938: INFO: Pod downwardapi-volume-504071ab-23e3-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:16:34.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pr2rf" for this suite.
Dec 21 11:16:41.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:16:41.054: INFO: namespace: e2e-tests-downward-api-pr2rf, resource: bindings, ignored listing per whitelist
Dec 21 11:16:41.134: INFO: namespace e2e-tests-downward-api-pr2rf deletion completed in 6.167702329s

• [SLOW TEST:21.393 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
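A minimal pod of the kind this test creates (all names here are illustrative): a downward API volume projects the container's own CPU request into a file, and the container prints it so the test can read it back from the logs.

```shell
# Sketch: downward API volume exposing requests.cpu as a file (divisor 1m,
# so a 250m request is written to the file as "250").
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
EOF
```

The memory-limit and podname variants later in this run differ only in the volume item: `resourceFieldRef` with `limits.memory`, or `fieldRef` with `metadata.name`.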
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:16:41.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-5cfa583c-23e3-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 21 11:16:41.279: INFO: Waiting up to 5m0s for pod "pod-secrets-5cfafaa2-23e3-11ea-bbd3-0242ac110005" in namespace "e2e-tests-secrets-c778j" to be "success or failure"
Dec 21 11:16:41.370: INFO: Pod "pod-secrets-5cfafaa2-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 91.61213ms
Dec 21 11:16:43.601: INFO: Pod "pod-secrets-5cfafaa2-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321952468s
Dec 21 11:16:45.613: INFO: Pod "pod-secrets-5cfafaa2-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33426675s
Dec 21 11:16:47.798: INFO: Pod "pod-secrets-5cfafaa2-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.519606678s
Dec 21 11:16:50.504: INFO: Pod "pod-secrets-5cfafaa2-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.225515318s
Dec 21 11:16:52.533: INFO: Pod "pod-secrets-5cfafaa2-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.254585026s
Dec 21 11:16:54.561: INFO: Pod "pod-secrets-5cfafaa2-23e3-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.282012667s
STEP: Saw pod success
Dec 21 11:16:54.561: INFO: Pod "pod-secrets-5cfafaa2-23e3-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:16:54.568: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-5cfafaa2-23e3-11ea-bbd3-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 21 11:16:54.923: INFO: Waiting for pod pod-secrets-5cfafaa2-23e3-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:16:54.937: INFO: Pod pod-secrets-5cfafaa2-23e3-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:16:54.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-c778j" for this suite.
Dec 21 11:17:05.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:17:05.403: INFO: namespace: e2e-tests-secrets-c778j, resource: bindings, ignored listing per whitelist
Dec 21 11:17:05.404: INFO: namespace e2e-tests-secrets-c778j deletion completed in 10.305142248s

• [SLOW TEST:24.270 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
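A minimal equivalent of the secret-volume pod above (names illustrative), showing where `defaultMode` enters:

```shell
# Sketch: secret mounted as a volume with defaultMode 0400, so the projected
# file is owner-read-only; the container prints the mode and content.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-volume-demo
data:
  data-1: dmFsdWUtMQ==        # base64 for "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-volume-demo
      defaultMode: 0400
EOF
```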
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:17:05.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-ngpsq
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-ngpsq
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-ngpsq
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-ngpsq
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-ngpsq
Dec 21 11:17:21.679: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-ngpsq, name: ss-0, uid: 74ba009b-23e3-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Dec 21 11:17:22.083: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-ngpsq, name: ss-0, uid: 74ba009b-23e3-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 21 11:17:22.129: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-ngpsq, name: ss-0, uid: 74ba009b-23e3-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Dec 21 11:17:22.188: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-ngpsq
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-ngpsq
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-ngpsq and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 21 11:17:37.749: INFO: Deleting all statefulset in ns e2e-tests-statefulset-ngpsq
Dec 21 11:17:37.752: INFO: Scaling statefulset ss to 0
Dec 21 11:17:47.823: INFO: Waiting for statefulset status.replicas updated to 0
Dec 21 11:17:47.827: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:17:47.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-ngpsq" for this suite.
Dec 21 11:17:56.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:17:56.798: INFO: namespace: e2e-tests-statefulset-ngpsq, resource: bindings, ignored listing per whitelist
Dec 21 11:17:56.998: INFO: namespace e2e-tests-statefulset-ngpsq deletion completed in 8.341916759s

• [SLOW TEST:51.594 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
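What the eviction sequence above amounts to, reproduced by hand (namespace and pod names are illustrative): the StatefulSet's pod and a bare pod both request the same `hostPort`, the stateful pod lands in Failed, and removing the conflicting pod lets the controller recreate ss-0.

```shell
# Observe the Failed phase while the hostPort conflict exists:
kubectl -n statefulset-demo get pod ss-0 -o jsonpath='{.status.phase}'

# Remove the pod holding the conflicting port, then watch the controller
# delete and recreate ss-0 until it reaches Running:
kubectl -n statefulset-demo delete pod test-pod
kubectl -n statefulset-demo get pod ss-0 --watch
```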
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:17:56.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 21 11:17:57.249: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a41079f-23e3-11ea-bbd3-0242ac110005" in namespace "e2e-tests-downward-api-jg9ct" to be "success or failure"
Dec 21 11:17:57.272: INFO: Pod "downwardapi-volume-8a41079f-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.587949ms
Dec 21 11:18:00.738: INFO: Pod "downwardapi-volume-8a41079f-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.489500077s
Dec 21 11:18:02.762: INFO: Pod "downwardapi-volume-8a41079f-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.513181581s
Dec 21 11:18:05.849: INFO: Pod "downwardapi-volume-8a41079f-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.600767464s
Dec 21 11:18:07.897: INFO: Pod "downwardapi-volume-8a41079f-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.648524948s
Dec 21 11:18:10.164: INFO: Pod "downwardapi-volume-8a41079f-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.915578835s
Dec 21 11:18:12.181: INFO: Pod "downwardapi-volume-8a41079f-23e3-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.931810167s
STEP: Saw pod success
Dec 21 11:18:12.181: INFO: Pod "downwardapi-volume-8a41079f-23e3-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:18:12.189: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8a41079f-23e3-11ea-bbd3-0242ac110005 container client-container: 
STEP: delete the pod
Dec 21 11:18:12.964: INFO: Waiting for pod downwardapi-volume-8a41079f-23e3-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:18:12.992: INFO: Pod downwardapi-volume-8a41079f-23e3-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:18:12.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jg9ct" for this suite.
Dec 21 11:18:19.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:18:19.252: INFO: namespace: e2e-tests-downward-api-jg9ct, resource: bindings, ignored listing per whitelist
Dec 21 11:18:19.276: INFO: namespace e2e-tests-downward-api-jg9ct deletion completed in 6.276645408s

• [SLOW TEST:22.277 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:18:19.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 21 11:18:19.506: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9786c424-23e3-11ea-bbd3-0242ac110005" in namespace "e2e-tests-downward-api-rbscs" to be "success or failure"
Dec 21 11:18:19.572: INFO: Pod "downwardapi-volume-9786c424-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 65.547074ms
Dec 21 11:18:21.590: INFO: Pod "downwardapi-volume-9786c424-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083501918s
Dec 21 11:18:23.609: INFO: Pod "downwardapi-volume-9786c424-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103238154s
Dec 21 11:18:25.967: INFO: Pod "downwardapi-volume-9786c424-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.46124071s
Dec 21 11:18:27.984: INFO: Pod "downwardapi-volume-9786c424-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.477528125s
Dec 21 11:18:30.009: INFO: Pod "downwardapi-volume-9786c424-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.503046814s
Dec 21 11:18:32.418: INFO: Pod "downwardapi-volume-9786c424-23e3-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.91165941s
STEP: Saw pod success
Dec 21 11:18:32.418: INFO: Pod "downwardapi-volume-9786c424-23e3-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:18:32.437: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9786c424-23e3-11ea-bbd3-0242ac110005 container client-container: 
STEP: delete the pod
Dec 21 11:18:32.660: INFO: Waiting for pod downwardapi-volume-9786c424-23e3-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:18:32.760: INFO: Pod downwardapi-volume-9786c424-23e3-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:18:32.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rbscs" for this suite.
Dec 21 11:18:38.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:18:38.944: INFO: namespace: e2e-tests-downward-api-rbscs, resource: bindings, ignored listing per whitelist
Dec 21 11:18:39.038: INFO: namespace e2e-tests-downward-api-rbscs deletion completed in 6.213902836s

• [SLOW TEST:19.761 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:18:39.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 21 11:18:59.426: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 21 11:18:59.441: INFO: Pod pod-with-poststart-http-hook still exists
Dec 21 11:19:01.441: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 21 11:19:01.663: INFO: Pod pod-with-poststart-http-hook still exists
Dec 21 11:19:03.441: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 21 11:19:03.726: INFO: Pod pod-with-poststart-http-hook still exists
Dec 21 11:19:05.441: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 21 11:19:05.481: INFO: Pod pod-with-poststart-http-hook still exists
Dec 21 11:19:07.441: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 21 11:19:07.506: INFO: Pod pod-with-poststart-http-hook still exists
Dec 21 11:19:09.441: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 21 11:19:09.473: INFO: Pod pod-with-poststart-http-hook still exists
Dec 21 11:19:11.442: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 21 11:19:11.456: INFO: Pod pod-with-poststart-http-hook still exists
Dec 21 11:19:13.441: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 21 11:19:13.460: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:19:13.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-8qpq7" for this suite.
Dec 21 11:19:37.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:19:37.710: INFO: namespace: e2e-tests-container-lifecycle-hook-8qpq7, resource: bindings, ignored listing per whitelist
Dec 21 11:19:37.749: INFO: namespace e2e-tests-container-lifecycle-hook-8qpq7 deletion completed in 24.282655013s

• [SLOW TEST:58.711 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:19:37.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:19:38.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-btdmt" for this suite.
Dec 21 11:19:44.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:19:44.138: INFO: namespace: e2e-tests-kubelet-test-btdmt, resource: bindings, ignored listing per whitelist
Dec 21 11:19:44.355: INFO: namespace e2e-tests-kubelet-test-btdmt deletion completed in 6.318954388s

• [SLOW TEST:6.605 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:19:44.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 21 11:19:44.609: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 21 11:19:49.629: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:19:51.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-495jk" for this suite.
Dec 21 11:20:06.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:20:06.884: INFO: namespace: e2e-tests-replication-controller-495jk, resource: bindings, ignored listing per whitelist
Dec 21 11:20:06.914: INFO: namespace e2e-tests-replication-controller-495jk deletion completed in 15.296990666s

• [SLOW TEST:22.559 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
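The ReplicationController test above patches a pod's label so it stops matching the controller's selector, at which point the controller "releases" it (stops counting it as a replica). The ownership decision reduces to a subset check over labels; a sketch with plain maps, which is not the real `labels.Selector` machinery:

```go
package main

import "fmt"

// matches reports whether every key/value pair required by the
// selector is present on the pod's labels — the subset test a
// ReplicationController uses to decide which pods it owns.
func matches(selector, podLabels map[string]string) bool {
	for k, v := range selector {
		if podLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	selector := map[string]string{"name": "pod-release"}
	pod := map[string]string{"name": "pod-release"}
	fmt.Println("owned:", matches(selector, pod)) // true

	// Patch the label, as the test does; the pod no longer matches
	// and is released by the controller.
	pod["name"] = "pod-release-patched"
	fmt.Println("owned:", matches(selector, pod)) // false
}
```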
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:20:06.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1221 11:20:09.816123       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 21 11:20:09.816: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:20:09.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-bss88" for this suite.
Dec 21 11:20:16.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:20:16.940: INFO: namespace: e2e-tests-gc-bss88, resource: bindings, ignored listing per whitelist
Dec 21 11:20:16.940: INFO: namespace e2e-tests-gc-bss88 deletion completed in 7.046061728s

• [SLOW TEST:10.026 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
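The garbage collector deletes the ReplicaSet and its pods because the Deployment set itself as their owner via `ownerReferences`; on a non-orphaning delete, dependents whose owners are all gone get collected, cascading down the graph. A toy sketch of that dependency walk with in-memory maps — not the real controller's event-driven graph builder:

```go
package main

import "fmt"

// object is a minimal stand-in for an API object with ownerReferences.
type object struct {
	name   string
	owners []string // names of owner objects
}

// collect deletes every object whose owners have all been removed,
// repeating until the graph is stable (cascading deletion).
func collect(live map[string]object) {
	for changed := true; changed; {
		changed = false
		for name, obj := range live {
			for _, owner := range obj.owners {
				if _, ok := live[owner]; !ok {
					delete(live, name)
					changed = true
					break
				}
			}
		}
	}
}

func main() {
	live := map[string]object{
		"deployment": {name: "deployment"},
		"replicaset": {name: "replicaset", owners: []string{"deployment"}},
		"pod-1":      {name: "pod-1", owners: []string{"replicaset"}},
		"pod-2":      {name: "pod-2", owners: []string{"replicaset"}},
	}
	// Non-orphaning delete of the deployment, as in the test above.
	delete(live, "deployment")
	collect(live)
	fmt.Println("objects remaining:", len(live)) // 0
}
```

The "expected 0 rs, got 1 rs" STEP lines in the log are the test observing this cascade mid-flight before it settles.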
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:20:16.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-dda9f4e8-23e3-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 21 11:20:17.189: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ddab1a9c-23e3-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-nkw2s" to be "success or failure"
Dec 21 11:20:17.202: INFO: Pod "pod-projected-configmaps-ddab1a9c-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.723061ms
Dec 21 11:20:19.215: INFO: Pod "pod-projected-configmaps-ddab1a9c-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025354465s
Dec 21 11:20:21.263: INFO: Pod "pod-projected-configmaps-ddab1a9c-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07368277s
Dec 21 11:20:23.290: INFO: Pod "pod-projected-configmaps-ddab1a9c-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100687672s
Dec 21 11:20:25.303: INFO: Pod "pod-projected-configmaps-ddab1a9c-23e3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.113018269s
Dec 21 11:20:27.333: INFO: Pod "pod-projected-configmaps-ddab1a9c-23e3-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.143759653s
STEP: Saw pod success
Dec 21 11:20:27.333: INFO: Pod "pod-projected-configmaps-ddab1a9c-23e3-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:20:27.340: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-ddab1a9c-23e3-11ea-bbd3-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 21 11:20:27.494: INFO: Waiting for pod pod-projected-configmaps-ddab1a9c-23e3-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:20:27.509: INFO: Pod pod-projected-configmaps-ddab1a9c-23e3-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:20:27.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nkw2s" for this suite.
Dec 21 11:20:33.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:20:33.984: INFO: namespace: e2e-tests-projected-nkw2s, resource: bindings, ignored listing per whitelist
Dec 21 11:20:34.013: INFO: namespace e2e-tests-projected-nkw2s deletion completed in 6.49240254s

• [SLOW TEST:17.072 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:20:34.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:20:46.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-9c57d" for this suite.
Dec 21 11:20:52.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:20:52.807: INFO: namespace: e2e-tests-kubelet-test-9c57d, resource: bindings, ignored listing per whitelist
Dec 21 11:20:52.848: INFO: namespace e2e-tests-kubelet-test-9c57d deletion completed in 6.260928135s

• [SLOW TEST:18.835 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
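The failing-busybox test asserts that the container ends up in a terminated state carrying a reason. In the kubelet's status mapping a clean exit is reported as "Completed" and a non-zero exit as "Error"; a simplified sketch of that mapping (the real kubelet also surfaces reasons such as "OOMKilled"):

```go
package main

import "fmt"

// terminatedReason maps a container exit code to the Reason string
// surfaced in the pod's terminated container status (simplified).
func terminatedReason(exitCode int) string {
	if exitCode == 0 {
		return "Completed"
	}
	return "Error"
}

func main() {
	fmt.Println(terminatedReason(0)) // Completed
	fmt.Println(terminatedReason(1)) // Error: the always-failing busybox command
}
```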
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:20:52.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 21 11:20:53.070: INFO: Creating ReplicaSet my-hostname-basic-f310695b-23e3-11ea-bbd3-0242ac110005
Dec 21 11:20:53.147: INFO: Pod name my-hostname-basic-f310695b-23e3-11ea-bbd3-0242ac110005: Found 0 pods out of 1
Dec 21 11:20:58.172: INFO: Pod name my-hostname-basic-f310695b-23e3-11ea-bbd3-0242ac110005: Found 1 pods out of 1
Dec 21 11:20:58.172: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-f310695b-23e3-11ea-bbd3-0242ac110005" is running
Dec 21 11:21:02.197: INFO: Pod "my-hostname-basic-f310695b-23e3-11ea-bbd3-0242ac110005-6z8tp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-21 11:20:53 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-21 11:20:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f310695b-23e3-11ea-bbd3-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-21 11:20:53 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f310695b-23e3-11ea-bbd3-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-21 11:20:53 +0000 UTC Reason: Message:}])
Dec 21 11:21:02.197: INFO: Trying to dial the pod
Dec 21 11:21:07.277: INFO: Controller my-hostname-basic-f310695b-23e3-11ea-bbd3-0242ac110005: Got expected result from replica 1 [my-hostname-basic-f310695b-23e3-11ea-bbd3-0242ac110005-6z8tp]: "my-hostname-basic-f310695b-23e3-11ea-bbd3-0242ac110005-6z8tp", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:21:07.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-rxfq2" for this suite.
Dec 21 11:21:13.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:21:13.509: INFO: namespace: e2e-tests-replicaset-rxfq2, resource: bindings, ignored listing per whitelist
Dec 21 11:21:13.555: INFO: namespace e2e-tests-replicaset-rxfq2 deletion completed in 6.226207486s

• [SLOW TEST:20.706 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:21:13.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Dec 21 11:21:26.015: INFO: Pod pod-hostip-ff71d568-23e3-11ea-bbd3-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:21:26.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-krpgj" for this suite.
Dec 21 11:21:50.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:21:50.153: INFO: namespace: e2e-tests-pods-krpgj, resource: bindings, ignored listing per whitelist
Dec 21 11:21:50.194: INFO: namespace e2e-tests-pods-krpgj deletion completed in 24.169269641s

• [SLOW TEST:36.639 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
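The host-IP test asserts that once the pod is running, `status.hostIP` is populated with the node's address (10.96.1.240 in the log above). The assertion reduces to "non-empty and parseable as an IP"; a sketch:

```go
package main

import (
	"fmt"
	"net"
)

// validHostIP reports whether a pod status carries a usable hostIP,
// i.e. the field is set and parses as an IP address.
func validHostIP(hostIP string) bool {
	return hostIP != "" && net.ParseIP(hostIP) != nil
}

func main() {
	fmt.Println(validHostIP("10.96.1.240")) // true: the node IP from the log
	fmt.Println(validHostIP(""))            // false: status not yet populated
}
```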
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:21:50.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 21 11:21:50.566: INFO: Waiting up to 5m0s for pod "pod-1544793a-23e4-11ea-bbd3-0242ac110005" in namespace "e2e-tests-emptydir-65dgl" to be "success or failure"
Dec 21 11:21:50.614: INFO: Pod "pod-1544793a-23e4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 48.017897ms
Dec 21 11:21:52.701: INFO: Pod "pod-1544793a-23e4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134364574s
Dec 21 11:21:54.712: INFO: Pod "pod-1544793a-23e4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145777692s
Dec 21 11:21:56.722: INFO: Pod "pod-1544793a-23e4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15580415s
Dec 21 11:21:58.760: INFO: Pod "pod-1544793a-23e4-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.193197451s
STEP: Saw pod success
Dec 21 11:21:58.760: INFO: Pod "pod-1544793a-23e4-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:21:58.913: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1544793a-23e4-11ea-bbd3-0242ac110005 container test-container: 
STEP: delete the pod
Dec 21 11:21:59.079: INFO: Waiting for pod pod-1544793a-23e4-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:21:59.091: INFO: Pod pod-1544793a-23e4-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:21:59.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-65dgl" for this suite.
Dec 21 11:22:05.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:22:05.304: INFO: namespace: e2e-tests-emptydir-65dgl, resource: bindings, ignored listing per whitelist
Dec 21 11:22:05.452: INFO: namespace e2e-tests-emptydir-65dgl deletion completed in 6.351110533s

• [SLOW TEST:15.258 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:22:05.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 21 11:25:09.001: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:09.172: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:11.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:11.190: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:13.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:13.202: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:15.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:15.193: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:17.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:17.195: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:19.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:19.191: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:21.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:21.195: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:23.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:23.192: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:25.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:25.196: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:27.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:27.183: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:29.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:29.192: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:31.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:31.190: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:33.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:33.189: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:35.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:35.187: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:37.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:37.246: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:39.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:39.225: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:41.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:41.185: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:43.174: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:43.203: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:45.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:45.209: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:47.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:47.192: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:49.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:49.183: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:51.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:51.226: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:53.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:53.184: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:55.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:55.195: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:57.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:57.191: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:25:59.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:25:59.189: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:01.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:01.193: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:03.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:03.191: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:05.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:05.192: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:07.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:07.191: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:09.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:09.192: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:11.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:11.192: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:13.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:13.187: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:15.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:15.189: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:17.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:17.192: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:19.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:19.191: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:21.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:21.202: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:23.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:23.195: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:25.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:25.190: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:27.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:27.193: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:29.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:29.191: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:31.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:31.189: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:33.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:33.218: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:35.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:35.189: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:37.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:37.189: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:39.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:39.191: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:41.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:41.189: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:43.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:43.275: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:45.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:45.189: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:47.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:47.191: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:49.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:49.189: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:51.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:51.194: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:53.174: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:53.190: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:55.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:55.478: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:57.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:57.642: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:26:59.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:26:59.197: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:27:01.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:27:01.188: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 21 11:27:03.173: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 21 11:27:03.189: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:27:03.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-pnrg8" for this suite.
Dec 21 11:27:27.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:27:27.348: INFO: namespace: e2e-tests-container-lifecycle-hook-pnrg8, resource: bindings, ignored listing per whitelist
Dec 21 11:27:27.572: INFO: namespace e2e-tests-container-lifecycle-hook-pnrg8 deletion completed in 24.376059028s

• [SLOW TEST:322.119 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
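The block above shows the poststart-exec-hook conformance test polling until its hook pod is deleted. A minimal sketch of the kind of pod manifest such a lifecycle-hook test submits — all field values here are illustrative assumptions, not recovered from this log:

```python
# Sketch of a pod manifest with a postStart exec lifecycle hook, similar in
# shape to what the e2e test above creates. Names, image, and commands are
# illustrative assumptions, not values taken from this log.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-poststart-exec-hook"},
    "spec": {
        "containers": [{
            "name": "pod-with-poststart-exec-hook",
            "image": "busybox",
            "command": ["sh", "-c", "sleep 600"],
            "lifecycle": {
                "postStart": {
                    "exec": {
                        # runs inside the container right after it starts
                        "command": ["sh", "-c", "echo started > /tmp/poststart"]
                    }
                }
            },
        }]
    },
}

def has_poststart_exec(manifest):
    """Return True if every container declares a postStart exec hook."""
    containers = manifest["spec"]["containers"]
    return all(
        "exec" in c.get("lifecycle", {}).get("postStart", {})
        for c in containers
    )
```

The long "still exists" run in the log is simply the framework polling every two seconds for the pod object to disappear after deletion.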
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:27:27.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 21 11:27:27.943: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-x2rtz,SelfLink:/api/v1/namespaces/e2e-tests-watch-x2rtz/configmaps/e2e-watch-test-label-changed,UID:de578e2b-23e4-11ea-a994-fa163e34d433,ResourceVersion:15559574,Generation:0,CreationTimestamp:2019-12-21 11:27:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 21 11:27:27.944: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-x2rtz,SelfLink:/api/v1/namespaces/e2e-tests-watch-x2rtz/configmaps/e2e-watch-test-label-changed,UID:de578e2b-23e4-11ea-a994-fa163e34d433,ResourceVersion:15559575,Generation:0,CreationTimestamp:2019-12-21 11:27:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 21 11:27:27.944: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-x2rtz,SelfLink:/api/v1/namespaces/e2e-tests-watch-x2rtz/configmaps/e2e-watch-test-label-changed,UID:de578e2b-23e4-11ea-a994-fa163e34d433,ResourceVersion:15559576,Generation:0,CreationTimestamp:2019-12-21 11:27:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 21 11:27:38.034: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-x2rtz,SelfLink:/api/v1/namespaces/e2e-tests-watch-x2rtz/configmaps/e2e-watch-test-label-changed,UID:de578e2b-23e4-11ea-a994-fa163e34d433,ResourceVersion:15559590,Generation:0,CreationTimestamp:2019-12-21 11:27:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 21 11:27:38.034: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-x2rtz,SelfLink:/api/v1/namespaces/e2e-tests-watch-x2rtz/configmaps/e2e-watch-test-label-changed,UID:de578e2b-23e4-11ea-a994-fa163e34d433,ResourceVersion:15559591,Generation:0,CreationTimestamp:2019-12-21 11:27:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 21 11:27:38.034: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-x2rtz,SelfLink:/api/v1/namespaces/e2e-tests-watch-x2rtz/configmaps/e2e-watch-test-label-changed,UID:de578e2b-23e4-11ea-a994-fa163e34d433,ResourceVersion:15559592,Generation:0,CreationTimestamp:2019-12-21 11:27:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:27:38.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-x2rtz" for this suite.
Dec 21 11:27:44.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:27:44.222: INFO: namespace: e2e-tests-watch-x2rtz, resource: bindings, ignored listing per whitelist
Dec 21 11:27:44.245: INFO: namespace e2e-tests-watch-x2rtz deletion completed in 6.200981679s

• [SLOW TEST:16.672 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
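The Watchers test above asserts that a label-selector watch reports an object leaving the selector as DELETED and re-entering it as ADDED — exactly the two ADDED/MODIFIED/DELETED triplets logged. A simplified sketch of that event-filtering behaviour (the event model is reduced to tuples for illustration):

```python
# Sketch of how a label-selector watch turns label changes into synthetic
# ADDED/DELETED events, mirroring the behaviour the test above asserts.
def filtered_events(raw_events, selector):
    """Yield the events a watcher with `selector` would observe.

    `raw_events` is a list of (event_type, labels) tuples describing the
    true object history; objects entering the selector appear as ADDED,
    objects leaving it (or being deleted) appear as DELETED.
    """
    was_matching = False
    for etype, labels in raw_events:
        matches = (etype != "DELETED" and
                   all(labels.get(k) == v for k, v in selector.items()))
        if matches and not was_matching:
            yield "ADDED"
        elif not matches and was_matching:
            yield "DELETED"
        elif matches and was_matching and etype == "MODIFIED":
            yield "MODIFIED"
        was_matching = matches

# Illustrative history following the STEPs in the log: create, modify,
# change the label away, modify unseen, restore the label, modify, delete.
history = [
    ("ADDED",    {"watch-this-configmap": "label-changed-and-restored"}),
    ("MODIFIED", {"watch-this-configmap": "label-changed-and-restored"}),
    ("MODIFIED", {"watch-this-configmap": "wrong-value"}),
    ("MODIFIED", {"watch-this-configmap": "wrong-value"}),
    ("MODIFIED", {"watch-this-configmap": "label-changed-and-restored"}),
    ("MODIFIED", {"watch-this-configmap": "label-changed-and-restored"}),
    ("DELETED",  {"watch-this-configmap": "label-changed-and-restored"}),
]
seen = list(filtered_events(
    history, {"watch-this-configmap": "label-changed-and-restored"}))
# seen reproduces the two triplets observed in the log above
```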
SSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:27:44.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-e864a752-23e4-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 21 11:27:44.873: INFO: Waiting up to 5m0s for pod "pod-secrets-e87ef1fc-23e4-11ea-bbd3-0242ac110005" in namespace "e2e-tests-secrets-l754r" to be "success or failure"
Dec 21 11:27:44.973: INFO: Pod "pod-secrets-e87ef1fc-23e4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 99.702555ms
Dec 21 11:27:46.981: INFO: Pod "pod-secrets-e87ef1fc-23e4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108499928s
Dec 21 11:27:49.015: INFO: Pod "pod-secrets-e87ef1fc-23e4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141674819s
Dec 21 11:27:51.086: INFO: Pod "pod-secrets-e87ef1fc-23e4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.213521416s
Dec 21 11:27:53.106: INFO: Pod "pod-secrets-e87ef1fc-23e4-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.232960681s
STEP: Saw pod success
Dec 21 11:27:53.106: INFO: Pod "pod-secrets-e87ef1fc-23e4-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:27:53.112: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-e87ef1fc-23e4-11ea-bbd3-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 21 11:27:53.238: INFO: Waiting for pod pod-secrets-e87ef1fc-23e4-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:27:53.252: INFO: Pod pod-secrets-e87ef1fc-23e4-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:27:53.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-l754r" for this suite.
Dec 21 11:27:59.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:27:59.400: INFO: namespace: e2e-tests-secrets-l754r, resource: bindings, ignored listing per whitelist
Dec 21 11:27:59.594: INFO: namespace e2e-tests-secrets-l754r deletion completed in 6.335396916s
STEP: Destroying namespace "e2e-tests-secret-namespace-sglss" for this suite.
Dec 21 11:28:05.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:28:05.820: INFO: namespace: e2e-tests-secret-namespace-sglss, resource: bindings, ignored listing per whitelist
Dec 21 11:28:05.850: INFO: namespace e2e-tests-secret-namespace-sglss deletion completed in 6.255497351s

• [SLOW TEST:21.605 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
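The Secrets test above tears down two namespaces because it deliberately creates a second secret with the same name in a different namespace. The property being verified is that secrets are namespaced: a pod's volume source names only the secret, and resolution is always against the pod's own namespace. A toy sketch of that identity model (namespace and data values are illustrative):

```python
# Sketch of secret namespacing: (namespace, name) is the object identity,
# so a same-named secret in another namespace can never leak into a pod's
# volume mount. All names and values here are illustrative.
store = {}

def create_secret(namespace, name, data):
    store[(namespace, name)] = data

def mount_secret(pod_namespace, name):
    """A pod volume source names only the secret, never a namespace;
    the kubelet resolves it in the pod's own namespace."""
    return store[(pod_namespace, name)]

create_secret("e2e-tests-secrets", "secret-test", {"data-1": "value-1"})
create_secret("e2e-tests-secret-namespace", "secret-test", {"data-1": "other"})
```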
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:28:05.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1221 11:28:36.772260       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 21 11:28:36.772: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:28:36.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-hpjnv" for this suite.
Dec 21 11:28:47.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:28:48.010: INFO: namespace: e2e-tests-gc-hpjnv, resource: bindings, ignored listing per whitelist
Dec 21 11:28:48.059: INFO: namespace e2e-tests-gc-hpjnv deletion completed in 11.282818481s

• [SLOW TEST:42.208 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
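The garbage-collector test above deletes a Deployment with `deleteOptions.propagationPolicy: Orphan` and then waits 30 seconds to confirm the ReplicaSet survives. A simplified sketch of those propagation semantics, with the ownership model reduced to a name-to-owner map for illustration:

```python
# Sketch of delete-propagation semantics: Orphan removes the owner but
# leaves dependents alive with their owner reference cleared, while
# Background/Foreground cascade the delete. Object model is simplified.
def delete(objects, name, propagation_policy):
    """Delete `name` from `objects` (a {name: owner_name_or_None} map)
    and return the resulting map."""
    objects = dict(objects)
    dependents = [n for n, owner in objects.items() if owner == name]
    del objects[name]
    if propagation_policy == "Orphan":
        for n in dependents:
            objects[n] = None          # orphaned: owner reference removed
    else:                              # Foreground / Background
        for n in dependents:
            del objects[n]             # cascaded delete
    return objects

# Illustrative cluster state: a ReplicaSet owned by a Deployment.
cluster = {"deployment": None, "replicaset": "deployment"}
```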
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:28:48.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-mpr9p
Dec 21 11:28:56.943: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-mpr9p
STEP: checking the pod's current state and verifying that restartCount is present
Dec 21 11:28:56.951: INFO: Initial restart count of pod liveness-http is 0
Dec 21 11:29:23.411: INFO: Restart count of pod e2e-tests-container-probe-mpr9p/liveness-http is now 1 (26.459971329s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:29:23.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-mpr9p" for this suite.
Dec 21 11:29:29.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:29:29.749: INFO: namespace: e2e-tests-container-probe-mpr9p, resource: bindings, ignored listing per whitelist
Dec 21 11:29:29.819: INFO: namespace e2e-tests-container-probe-mpr9p deletion completed in 6.246367585s

• [SLOW TEST:41.761 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
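The probe test above watches `restartCount` climb from 0 to 1 once the `/healthz` httpGet liveness probe starts failing. A sketch of the kubelet's restart decision, assuming the default failure threshold of three consecutive failures (the threshold is an assumption, not read from this log):

```python
# Sketch of the restart decision a liveness probe drives: after
# `failure_threshold` consecutive probe failures the container is
# restarted and the failure counter resets. Threshold is an assumed
# default, not a value from this log.
def restarts(probe_results, failure_threshold=3):
    """Count restarts given a sequence of probe outcomes (True=healthy)."""
    count = 0
    consecutive_failures = 0
    for healthy in probe_results:
        if healthy:
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures == failure_threshold:
                count += 1
                consecutive_failures = 0   # container restarted; counter resets
    return count
```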
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:29:29.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 21 11:29:30.101: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 21 11:29:30.111: INFO: Waiting for terminating namespaces to be deleted...
Dec 21 11:29:30.114: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Dec 21 11:29:30.130: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 21 11:29:30.130: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 21 11:29:30.130: INFO: 	Container coredns ready: true, restart count 0
Dec 21 11:29:30.130: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 21 11:29:30.130: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 21 11:29:30.130: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 21 11:29:30.130: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 21 11:29:30.130: INFO: 	Container weave ready: true, restart count 0
Dec 21 11:29:30.130: INFO: 	Container weave-npc ready: true, restart count 0
Dec 21 11:29:30.130: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 21 11:29:30.130: INFO: 	Container coredns ready: true, restart count 0
Dec 21 11:29:30.130: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 21 11:29:30.130: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-2c1ae360-23e5-11ea-bbd3-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-2c1ae360-23e5-11ea-bbd3-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-2c1ae360-23e5-11ea-bbd3-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:29:48.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-82tgm" for this suite.
Dec 21 11:30:04.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:30:04.757: INFO: namespace: e2e-tests-sched-pred-82tgm, resource: bindings, ignored listing per whitelist
Dec 21 11:30:04.767: INFO: namespace e2e-tests-sched-pred-82tgm deletion completed in 16.198154485s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:34.947 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
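The SchedulerPredicates test above applies a random label to a node, then relaunches a pod whose `nodeSelector` names that label. The predicate being validated is simple: every key/value pair in the pod's nodeSelector must appear in the node's labels. A sketch (the label key/value below is modelled on, but not copied from, the random e2e label in the log):

```python
# Sketch of the NodeSelector predicate: a node is feasible for a pod only
# if all nodeSelector pairs are present in the node's labels. The label
# key and value here are illustrative assumptions.
def node_selector_matches(node_labels, node_selector):
    return all(node_labels.get(k) == v for k, v in node_selector.items())

node_labels = {
    "kubernetes.io/hostname": "hunter-server-hu5at5svl7ps",
    "kubernetes.io/e2e-test-label": "42",   # hypothetical random e2e label
}
```

An empty nodeSelector matches every node, which is why the test's first, label-free pod schedules anywhere.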
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:30:04.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 21 11:30:23.266: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 21 11:30:23.291: INFO: Pod pod-with-prestop-http-hook still exists
Dec 21 11:30:25.292: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 21 11:30:25.316: INFO: Pod pod-with-prestop-http-hook still exists
Dec 21 11:30:27.292: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 21 11:30:27.340: INFO: Pod pod-with-prestop-http-hook still exists
Dec 21 11:30:29.292: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 21 11:30:29.452: INFO: Pod pod-with-prestop-http-hook still exists
Dec 21 11:30:31.292: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 21 11:30:31.334: INFO: Pod pod-with-prestop-http-hook still exists
Dec 21 11:30:33.292: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 21 11:30:33.311: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:30:33.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-n9pzf" for this suite.
Dec 21 11:30:57.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:30:57.529: INFO: namespace: e2e-tests-container-lifecycle-hook-n9pzf, resource: bindings, ignored listing per whitelist
Dec 21 11:30:57.577: INFO: namespace e2e-tests-container-lifecycle-hook-n9pzf deletion completed in 24.221725148s

• [SLOW TEST:52.809 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
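The prestop-http-hook test above first starts a handler pod, then verifies (the "check prestop hook" step) that the handler received the hook's HTTP GET before the container went away. The ordering guarantee under test can be sketched as follows; the step descriptions are illustrative, not framework output:

```python
# Sketch of the shutdown ordering a preStop hook provides on pod
# deletion: the hook handler runs before the container is signalled.
# Step strings are illustrative.
def terminate(has_prestop_hook):
    """Return the ordered steps the kubelet performs when deleting a pod."""
    steps = []
    if has_prestop_hook:
        steps.append("run preStop handler (httpGet to handler pod)")
    steps.append("send SIGTERM to container process")
    steps.append("after grace period, send SIGKILL if still running")
    return steps
```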
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:30:57.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Dec 21 11:30:57.980: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:30:58.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-w6p4r" for this suite.
Dec 21 11:31:04.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:31:04.365: INFO: namespace: e2e-tests-kubectl-w6p4r, resource: bindings, ignored listing per whitelist
Dec 21 11:31:04.427: INFO: namespace e2e-tests-kubectl-w6p4r deletion completed in 6.261778323s

• [SLOW TEST:6.849 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
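The kubectl test above runs `proxy -p 0` and then curls `/api/` through it. Passing port 0 relies on standard socket behaviour: binding to port 0 asks the kernel for any free ephemeral port, which the proxy then reports to its caller. A minimal demonstration of that mechanism (plain sockets, not kubectl itself):

```python
import socket

# Sketch of the port-0 convention `kubectl proxy -p 0` relies on: binding
# a listener to port 0 lets the kernel choose a free ephemeral port.
def bind_ephemeral_port():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))           # port 0: kernel picks a free port
    port = s.getsockname()[1]          # the actual port assigned
    s.close()
    return port
```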
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:31:04.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-5fa5b563-23e5-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 21 11:31:04.758: INFO: Waiting up to 5m0s for pod "pod-configmaps-5fa6a6e4-23e5-11ea-bbd3-0242ac110005" in namespace "e2e-tests-configmap-dlr9s" to be "success or failure"
Dec 21 11:31:04.880: INFO: Pod "pod-configmaps-5fa6a6e4-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 121.74114ms
Dec 21 11:31:06.901: INFO: Pod "pod-configmaps-5fa6a6e4-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142210339s
Dec 21 11:31:08.916: INFO: Pod "pod-configmaps-5fa6a6e4-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158092696s
Dec 21 11:31:10.939: INFO: Pod "pod-configmaps-5fa6a6e4-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.180493822s
Dec 21 11:31:12.954: INFO: Pod "pod-configmaps-5fa6a6e4-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.195112747s
Dec 21 11:31:14.979: INFO: Pod "pod-configmaps-5fa6a6e4-23e5-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.220827782s
STEP: Saw pod success
Dec 21 11:31:14.980: INFO: Pod "pod-configmaps-5fa6a6e4-23e5-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:31:14.986: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-5fa6a6e4-23e5-11ea-bbd3-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 21 11:31:15.194: INFO: Waiting for pod pod-configmaps-5fa6a6e4-23e5-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:31:15.223: INFO: Pod pod-configmaps-5fa6a6e4-23e5-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:31:15.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-dlr9s" for this suite.
Dec 21 11:31:23.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:31:23.413: INFO: namespace: e2e-tests-configmap-dlr9s, resource: bindings, ignored listing per whitelist
Dec 21 11:31:23.509: INFO: namespace e2e-tests-configmap-dlr9s deletion completed in 8.271539508s

• [SLOW TEST:19.082 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:31:23.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-lx8p8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-lx8p8 to expose endpoints map[]
Dec 21 11:31:23.954: INFO: Get endpoints failed (70.800026ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 21 11:31:24.975: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-lx8p8 exposes endpoints map[] (1.091856917s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-lx8p8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-lx8p8 to expose endpoints map[pod1:[100]]
Dec 21 11:31:29.734: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.733140099s elapsed, will retry)
Dec 21 11:31:34.483: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-lx8p8 exposes endpoints map[pod1:[100]] (9.481734722s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-lx8p8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-lx8p8 to expose endpoints map[pod1:[100] pod2:[101]]
Dec 21 11:31:39.546: INFO: Unexpected endpoints: found map[6bb7fb8a-23e5-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (5.029047195s elapsed, will retry)
Dec 21 11:31:42.647: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-lx8p8 exposes endpoints map[pod1:[100] pod2:[101]] (8.129580743s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-lx8p8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-lx8p8 to expose endpoints map[pod2:[101]]
Dec 21 11:31:42.714: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-lx8p8 exposes endpoints map[pod2:[101]] (40.332182ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-lx8p8
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-lx8p8 to expose endpoints map[]
Dec 21 11:31:44.040: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-lx8p8 exposes endpoints map[] (1.23776031s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:31:44.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-lx8p8" for this suite.
Dec 21 11:32:08.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:32:08.762: INFO: namespace: e2e-tests-services-lx8p8, resource: bindings, ignored listing per whitelist
Dec 21 11:32:08.818: INFO: namespace e2e-tests-services-lx8p8 deletion completed in 24.390803014s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:45.307 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:32:08.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 21 11:32:09.021: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:32:10.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-bzfvc" for this suite.
Dec 21 11:32:16.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:32:16.673: INFO: namespace: e2e-tests-custom-resource-definition-bzfvc, resource: bindings, ignored listing per whitelist
Dec 21 11:32:16.712: INFO: namespace e2e-tests-custom-resource-definition-bzfvc deletion completed in 6.476054448s

• [SLOW TEST:7.894 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:32:16.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Dec 21 11:32:16.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cnwlc'
Dec 21 11:32:20.231: INFO: stderr: ""
Dec 21 11:32:20.232: INFO: stdout: "pod/pause created\n"
Dec 21 11:32:20.232: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 21 11:32:20.232: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-cnwlc" to be "running and ready"
Dec 21 11:32:20.430: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 197.546003ms
Dec 21 11:32:22.458: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225719792s
Dec 21 11:32:24.998: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.76616431s
Dec 21 11:32:27.013: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.781305212s
Dec 21 11:32:29.027: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.795349696s
Dec 21 11:32:29.028: INFO: Pod "pause" satisfied condition "running and ready"
Dec 21 11:32:29.028: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 21 11:32:29.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-cnwlc'
Dec 21 11:32:29.265: INFO: stderr: ""
Dec 21 11:32:29.265: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 21 11:32:29.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-cnwlc'
Dec 21 11:32:29.443: INFO: stderr: ""
Dec 21 11:32:29.443: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 21 11:32:29.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-cnwlc'
Dec 21 11:32:29.577: INFO: stderr: ""
Dec 21 11:32:29.577: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 21 11:32:29.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-cnwlc'
Dec 21 11:32:29.670: INFO: stderr: ""
Dec 21 11:32:29.670: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Dec 21 11:32:29.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-cnwlc'
Dec 21 11:32:30.000: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 21 11:32:30.001: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 21 11:32:30.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-cnwlc'
Dec 21 11:32:30.151: INFO: stderr: "No resources found.\n"
Dec 21 11:32:30.151: INFO: stdout: ""
Dec 21 11:32:30.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-cnwlc -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 21 11:32:30.338: INFO: stderr: ""
Dec 21 11:32:30.338: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:32:30.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cnwlc" for this suite.
Dec 21 11:32:36.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:32:36.590: INFO: namespace: e2e-tests-kubectl-cnwlc, resource: bindings, ignored listing per whitelist
Dec 21 11:32:36.629: INFO: namespace e2e-tests-kubectl-cnwlc deletion completed in 6.269243144s

• [SLOW TEST:19.915 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:32:36.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 21 11:32:36.751: INFO: Waiting up to 5m0s for pod "downward-api-967bcc89-23e5-11ea-bbd3-0242ac110005" in namespace "e2e-tests-downward-api-q75sl" to be "success or failure"
Dec 21 11:32:36.876: INFO: Pod "downward-api-967bcc89-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 124.990831ms
Dec 21 11:32:38.897: INFO: Pod "downward-api-967bcc89-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146385674s
Dec 21 11:32:40.942: INFO: Pod "downward-api-967bcc89-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190621421s
Dec 21 11:32:43.003: INFO: Pod "downward-api-967bcc89-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.252345467s
Dec 21 11:32:45.028: INFO: Pod "downward-api-967bcc89-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.277062455s
Dec 21 11:32:47.263: INFO: Pod "downward-api-967bcc89-23e5-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.511700936s
STEP: Saw pod success
Dec 21 11:32:47.263: INFO: Pod "downward-api-967bcc89-23e5-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:32:47.271: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-967bcc89-23e5-11ea-bbd3-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 21 11:32:47.341: INFO: Waiting for pod downward-api-967bcc89-23e5-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:32:47.350: INFO: Pod downward-api-967bcc89-23e5-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:32:47.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-q75sl" for this suite.
Dec 21 11:32:53.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:32:53.570: INFO: namespace: e2e-tests-downward-api-q75sl, resource: bindings, ignored listing per whitelist
Dec 21 11:32:53.703: INFO: namespace e2e-tests-downward-api-q75sl deletion completed in 6.212436302s

• [SLOW TEST:17.074 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:32:53.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-lbc8b/configmap-test-a0d1dc0c-23e5-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 21 11:32:54.273: INFO: Waiting up to 5m0s for pod "pod-configmaps-a0e20f34-23e5-11ea-bbd3-0242ac110005" in namespace "e2e-tests-configmap-lbc8b" to be "success or failure"
Dec 21 11:32:54.414: INFO: Pod "pod-configmaps-a0e20f34-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 140.386498ms
Dec 21 11:32:56.624: INFO: Pod "pod-configmaps-a0e20f34-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.350541245s
Dec 21 11:32:58.653: INFO: Pod "pod-configmaps-a0e20f34-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.379635361s
Dec 21 11:33:01.050: INFO: Pod "pod-configmaps-a0e20f34-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.776803174s
Dec 21 11:33:03.409: INFO: Pod "pod-configmaps-a0e20f34-23e5-11ea-bbd3-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 9.135166882s
Dec 21 11:33:05.424: INFO: Pod "pod-configmaps-a0e20f34-23e5-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.150223976s
STEP: Saw pod success
Dec 21 11:33:05.424: INFO: Pod "pod-configmaps-a0e20f34-23e5-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:33:05.433: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a0e20f34-23e5-11ea-bbd3-0242ac110005 container env-test: 
STEP: delete the pod
Dec 21 11:33:05.996: INFO: Waiting for pod pod-configmaps-a0e20f34-23e5-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:33:06.318: INFO: Pod pod-configmaps-a0e20f34-23e5-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:33:06.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-lbc8b" for this suite.
Dec 21 11:33:12.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:33:12.853: INFO: namespace: e2e-tests-configmap-lbc8b, resource: bindings, ignored listing per whitelist
Dec 21 11:33:12.945: INFO: namespace e2e-tests-configmap-lbc8b deletion completed in 6.610563105s

• [SLOW TEST:19.241 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:33:12.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Dec 21 11:33:13.119: INFO: Waiting up to 5m0s for pod "client-containers-ac28fddc-23e5-11ea-bbd3-0242ac110005" in namespace "e2e-tests-containers-rcrdl" to be "success or failure"
Dec 21 11:33:13.132: INFO: Pod "client-containers-ac28fddc-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.338383ms
Dec 21 11:33:15.141: INFO: Pod "client-containers-ac28fddc-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02199052s
Dec 21 11:33:17.156: INFO: Pod "client-containers-ac28fddc-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036895282s
Dec 21 11:33:19.254: INFO: Pod "client-containers-ac28fddc-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13495627s
Dec 21 11:33:21.659: INFO: Pod "client-containers-ac28fddc-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.54008057s
Dec 21 11:33:23.665: INFO: Pod "client-containers-ac28fddc-23e5-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.546096117s
STEP: Saw pod success
Dec 21 11:33:23.665: INFO: Pod "client-containers-ac28fddc-23e5-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:33:23.668: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-ac28fddc-23e5-11ea-bbd3-0242ac110005 container test-container: 
STEP: delete the pod
Dec 21 11:33:23.824: INFO: Waiting for pod client-containers-ac28fddc-23e5-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:33:23.830: INFO: Pod client-containers-ac28fddc-23e5-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:33:23.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-rcrdl" for this suite.
Dec 21 11:33:30.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:33:30.707: INFO: namespace: e2e-tests-containers-rcrdl, resource: bindings, ignored listing per whitelist
Dec 21 11:33:30.782: INFO: namespace e2e-tests-containers-rcrdl deletion completed in 6.947176826s

• [SLOW TEST:17.837 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:33:30.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1221 11:33:47.157887       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 21 11:33:47.158: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:33:47.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-98bqb" for this suite.
Dec 21 11:34:08.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:34:08.241: INFO: namespace: e2e-tests-gc-98bqb, resource: bindings, ignored listing per whitelist
Dec 21 11:34:08.243: INFO: namespace e2e-tests-gc-98bqb deletion completed in 20.743768408s

• [SLOW TEST:37.461 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:34:08.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-q6h9
STEP: Creating a pod to test atomic-volume-subpath
Dec 21 11:34:08.508: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-q6h9" in namespace "e2e-tests-subpath-jf9x4" to be "success or failure"
Dec 21 11:34:08.517: INFO: Pod "pod-subpath-test-configmap-q6h9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.998944ms
Dec 21 11:34:10.575: INFO: Pod "pod-subpath-test-configmap-q6h9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066859103s
Dec 21 11:34:12.600: INFO: Pod "pod-subpath-test-configmap-q6h9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091767258s
Dec 21 11:34:14.640: INFO: Pod "pod-subpath-test-configmap-q6h9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131312917s
Dec 21 11:34:16.698: INFO: Pod "pod-subpath-test-configmap-q6h9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.189237284s
Dec 21 11:34:18.710: INFO: Pod "pod-subpath-test-configmap-q6h9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.201544221s
Dec 21 11:34:21.604: INFO: Pod "pod-subpath-test-configmap-q6h9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.095064433s
Dec 21 11:34:23.622: INFO: Pod "pod-subpath-test-configmap-q6h9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.11340587s
Dec 21 11:34:25.641: INFO: Pod "pod-subpath-test-configmap-q6h9": Phase="Running", Reason="", readiness=false. Elapsed: 17.132535158s
Dec 21 11:34:27.660: INFO: Pod "pod-subpath-test-configmap-q6h9": Phase="Running", Reason="", readiness=false. Elapsed: 19.151560473s
Dec 21 11:34:29.677: INFO: Pod "pod-subpath-test-configmap-q6h9": Phase="Running", Reason="", readiness=false. Elapsed: 21.168564373s
Dec 21 11:34:31.686: INFO: Pod "pod-subpath-test-configmap-q6h9": Phase="Running", Reason="", readiness=false. Elapsed: 23.177055943s
Dec 21 11:34:33.704: INFO: Pod "pod-subpath-test-configmap-q6h9": Phase="Running", Reason="", readiness=false. Elapsed: 25.194944567s
Dec 21 11:34:35.721: INFO: Pod "pod-subpath-test-configmap-q6h9": Phase="Running", Reason="", readiness=false. Elapsed: 27.21277137s
Dec 21 11:34:37.739: INFO: Pod "pod-subpath-test-configmap-q6h9": Phase="Running", Reason="", readiness=false. Elapsed: 29.230157806s
Dec 21 11:34:39.766: INFO: Pod "pod-subpath-test-configmap-q6h9": Phase="Running", Reason="", readiness=false. Elapsed: 31.257290341s
Dec 21 11:34:41.775: INFO: Pod "pod-subpath-test-configmap-q6h9": Phase="Running", Reason="", readiness=false. Elapsed: 33.26625406s
Dec 21 11:34:43.794: INFO: Pod "pod-subpath-test-configmap-q6h9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.285657383s
STEP: Saw pod success
Dec 21 11:34:43.794: INFO: Pod "pod-subpath-test-configmap-q6h9" satisfied condition "success or failure"
Dec 21 11:34:43.800: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-q6h9 container test-container-subpath-configmap-q6h9: 
STEP: delete the pod
Dec 21 11:34:44.092: INFO: Waiting for pod pod-subpath-test-configmap-q6h9 to disappear
Dec 21 11:34:44.112: INFO: Pod pod-subpath-test-configmap-q6h9 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-q6h9
Dec 21 11:34:44.113: INFO: Deleting pod "pod-subpath-test-configmap-q6h9" in namespace "e2e-tests-subpath-jf9x4"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:34:44.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-jf9x4" for this suite.
Dec 21 11:34:52.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:34:52.686: INFO: namespace: e2e-tests-subpath-jf9x4, resource: bindings, ignored listing per whitelist
Dec 21 11:34:52.690: INFO: namespace e2e-tests-subpath-jf9x4 deletion completed in 8.510151565s

• [SLOW TEST:44.446 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:34:52.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 21 11:35:01.491: INFO: Successfully updated pod "annotationupdatee7a242f1-23e5-11ea-bbd3-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:35:03.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vs2zk" for this suite.
Dec 21 11:35:27.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:35:27.928: INFO: namespace: e2e-tests-downward-api-vs2zk, resource: bindings, ignored listing per whitelist
Dec 21 11:35:27.943: INFO: namespace e2e-tests-downward-api-vs2zk deletion completed in 24.259617234s

• [SLOW TEST:35.252 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:35:27.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 21 11:35:28.113: INFO: Waiting up to 5m0s for pod "pod-fc9ed760-23e5-11ea-bbd3-0242ac110005" in namespace "e2e-tests-emptydir-5xsp7" to be "success or failure"
Dec 21 11:35:28.160: INFO: Pod "pod-fc9ed760-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 46.221721ms
Dec 21 11:35:30.273: INFO: Pod "pod-fc9ed760-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159449002s
Dec 21 11:35:32.287: INFO: Pod "pod-fc9ed760-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173489916s
Dec 21 11:35:34.323: INFO: Pod "pod-fc9ed760-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209766599s
Dec 21 11:35:36.338: INFO: Pod "pod-fc9ed760-23e5-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.22422038s
Dec 21 11:35:38.352: INFO: Pod "pod-fc9ed760-23e5-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.238458053s
STEP: Saw pod success
Dec 21 11:35:38.352: INFO: Pod "pod-fc9ed760-23e5-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:35:38.357: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-fc9ed760-23e5-11ea-bbd3-0242ac110005 container test-container: 
STEP: delete the pod
Dec 21 11:35:38.758: INFO: Waiting for pod pod-fc9ed760-23e5-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:35:38.764: INFO: Pod pod-fc9ed760-23e5-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:35:38.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5xsp7" for this suite.
Dec 21 11:35:44.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:35:44.826: INFO: namespace: e2e-tests-emptydir-5xsp7, resource: bindings, ignored listing per whitelist
Dec 21 11:35:44.984: INFO: namespace e2e-tests-emptydir-5xsp7 deletion completed in 6.214062584s

• [SLOW TEST:17.041 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:35:44.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-8lsf
STEP: Creating a pod to test atomic-volume-subpath
Dec 21 11:35:45.202: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-8lsf" in namespace "e2e-tests-subpath-jgfhq" to be "success or failure"
Dec 21 11:35:45.227: INFO: Pod "pod-subpath-test-downwardapi-8lsf": Phase="Pending", Reason="", readiness=false. Elapsed: 25.358321ms
Dec 21 11:35:47.245: INFO: Pod "pod-subpath-test-downwardapi-8lsf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043297318s
Dec 21 11:35:49.258: INFO: Pod "pod-subpath-test-downwardapi-8lsf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055690298s
Dec 21 11:35:51.313: INFO: Pod "pod-subpath-test-downwardapi-8lsf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111090305s
Dec 21 11:35:53.331: INFO: Pod "pod-subpath-test-downwardapi-8lsf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129630146s
Dec 21 11:35:55.351: INFO: Pod "pod-subpath-test-downwardapi-8lsf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.148677873s
Dec 21 11:35:58.292: INFO: Pod "pod-subpath-test-downwardapi-8lsf": Phase="Pending", Reason="", readiness=false. Elapsed: 13.090009164s
Dec 21 11:36:00.306: INFO: Pod "pod-subpath-test-downwardapi-8lsf": Phase="Pending", Reason="", readiness=false. Elapsed: 15.104419661s
Dec 21 11:36:02.318: INFO: Pod "pod-subpath-test-downwardapi-8lsf": Phase="Running", Reason="", readiness=false. Elapsed: 17.116391924s
Dec 21 11:36:04.340: INFO: Pod "pod-subpath-test-downwardapi-8lsf": Phase="Running", Reason="", readiness=false. Elapsed: 19.137834697s
Dec 21 11:36:06.360: INFO: Pod "pod-subpath-test-downwardapi-8lsf": Phase="Running", Reason="", readiness=false. Elapsed: 21.158558801s
Dec 21 11:36:08.395: INFO: Pod "pod-subpath-test-downwardapi-8lsf": Phase="Running", Reason="", readiness=false. Elapsed: 23.193357809s
Dec 21 11:36:10.413: INFO: Pod "pod-subpath-test-downwardapi-8lsf": Phase="Running", Reason="", readiness=false. Elapsed: 25.211574567s
Dec 21 11:36:12.429: INFO: Pod "pod-subpath-test-downwardapi-8lsf": Phase="Running", Reason="", readiness=false. Elapsed: 27.227092319s
Dec 21 11:36:14.456: INFO: Pod "pod-subpath-test-downwardapi-8lsf": Phase="Running", Reason="", readiness=false. Elapsed: 29.253794684s
Dec 21 11:36:16.479: INFO: Pod "pod-subpath-test-downwardapi-8lsf": Phase="Running", Reason="", readiness=false. Elapsed: 31.276679919s
Dec 21 11:36:18.513: INFO: Pod "pod-subpath-test-downwardapi-8lsf": Phase="Running", Reason="", readiness=false. Elapsed: 33.311435358s
Dec 21 11:36:20.567: INFO: Pod "pod-subpath-test-downwardapi-8lsf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.365560447s
STEP: Saw pod success
Dec 21 11:36:20.568: INFO: Pod "pod-subpath-test-downwardapi-8lsf" satisfied condition "success or failure"
Dec 21 11:36:20.583: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-8lsf container test-container-subpath-downwardapi-8lsf: 
STEP: delete the pod
Dec 21 11:36:20.800: INFO: Waiting for pod pod-subpath-test-downwardapi-8lsf to disappear
Dec 21 11:36:20.859: INFO: Pod pod-subpath-test-downwardapi-8lsf no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-8lsf
Dec 21 11:36:20.859: INFO: Deleting pod "pod-subpath-test-downwardapi-8lsf" in namespace "e2e-tests-subpath-jgfhq"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:36:20.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-jgfhq" for this suite.
Dec 21 11:36:29.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:36:29.202: INFO: namespace: e2e-tests-subpath-jgfhq, resource: bindings, ignored listing per whitelist
Dec 21 11:36:29.332: INFO: namespace e2e-tests-subpath-jgfhq deletion completed in 8.348618306s

• [SLOW TEST:44.348 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:36:29.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 21 11:36:29.625: INFO: Waiting up to 5m0s for pod "downwardapi-volume-214640bd-23e6-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-4ddxc" to be "success or failure"
Dec 21 11:36:29.685: INFO: Pod "downwardapi-volume-214640bd-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 59.912374ms
Dec 21 11:36:31.943: INFO: Pod "downwardapi-volume-214640bd-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317894069s
Dec 21 11:36:33.995: INFO: Pod "downwardapi-volume-214640bd-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.369116255s
Dec 21 11:36:36.007: INFO: Pod "downwardapi-volume-214640bd-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.381072736s
Dec 21 11:36:38.176: INFO: Pod "downwardapi-volume-214640bd-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550375698s
Dec 21 11:36:40.424: INFO: Pod "downwardapi-volume-214640bd-23e6-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.798473899s
STEP: Saw pod success
Dec 21 11:36:40.424: INFO: Pod "downwardapi-volume-214640bd-23e6-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:36:40.466: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-214640bd-23e6-11ea-bbd3-0242ac110005 container client-container: 
STEP: delete the pod
Dec 21 11:36:40.664: INFO: Waiting for pod downwardapi-volume-214640bd-23e6-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:36:40.758: INFO: Pod downwardapi-volume-214640bd-23e6-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:36:40.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4ddxc" for this suite.
Dec 21 11:36:46.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:36:46.942: INFO: namespace: e2e-tests-projected-4ddxc, resource: bindings, ignored listing per whitelist
Dec 21 11:36:46.955: INFO: namespace e2e-tests-projected-4ddxc deletion completed in 6.188556618s

• [SLOW TEST:17.622 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:36:46.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 21 11:36:47.232: INFO: Waiting up to 5m0s for pod "downward-api-2bc8339d-23e6-11ea-bbd3-0242ac110005" in namespace "e2e-tests-downward-api-c7vm9" to be "success or failure"
Dec 21 11:36:47.249: INFO: Pod "downward-api-2bc8339d-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.88265ms
Dec 21 11:36:49.267: INFO: Pod "downward-api-2bc8339d-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034060509s
Dec 21 11:36:51.276: INFO: Pod "downward-api-2bc8339d-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043952782s
Dec 21 11:36:53.372: INFO: Pod "downward-api-2bc8339d-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139334742s
Dec 21 11:36:55.411: INFO: Pod "downward-api-2bc8339d-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.177950199s
Dec 21 11:36:57.434: INFO: Pod "downward-api-2bc8339d-23e6-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.2012755s
STEP: Saw pod success
Dec 21 11:36:57.435: INFO: Pod "downward-api-2bc8339d-23e6-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:36:57.448: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-2bc8339d-23e6-11ea-bbd3-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 21 11:36:58.476: INFO: Waiting for pod downward-api-2bc8339d-23e6-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:36:58.512: INFO: Pod downward-api-2bc8339d-23e6-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:36:58.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-c7vm9" for this suite.
Dec 21 11:37:06.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:37:06.684: INFO: namespace: e2e-tests-downward-api-c7vm9, resource: bindings, ignored listing per whitelist
Dec 21 11:37:06.797: INFO: namespace e2e-tests-downward-api-c7vm9 deletion completed in 8.264650694s

• [SLOW TEST:19.842 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:37:06.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-rbhx9
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 21 11:37:07.002: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 21 11:37:49.241: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-rbhx9 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 11:37:49.241: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 11:37:49.692: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:37:49.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-rbhx9" for this suite.
Dec 21 11:38:13.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:38:13.818: INFO: namespace: e2e-tests-pod-network-test-rbhx9, resource: bindings, ignored listing per whitelist
Dec 21 11:38:14.093: INFO: namespace e2e-tests-pod-network-test-rbhx9 deletion completed in 24.383405863s

• [SLOW TEST:67.295 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:38:14.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 21 11:38:14.406: INFO: Waiting up to 5m0s for pod "pod-5fbcb4e0-23e6-11ea-bbd3-0242ac110005" in namespace "e2e-tests-emptydir-w26f4" to be "success or failure"
Dec 21 11:38:14.417: INFO: Pod "pod-5fbcb4e0-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.129492ms
Dec 21 11:38:16.445: INFO: Pod "pod-5fbcb4e0-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038004747s
Dec 21 11:38:18.462: INFO: Pod "pod-5fbcb4e0-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055334843s
Dec 21 11:38:20.867: INFO: Pod "pod-5fbcb4e0-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.460061151s
Dec 21 11:38:22.883: INFO: Pod "pod-5fbcb4e0-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.476375193s
Dec 21 11:38:24.904: INFO: Pod "pod-5fbcb4e0-23e6-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.496971492s
STEP: Saw pod success
Dec 21 11:38:24.904: INFO: Pod "pod-5fbcb4e0-23e6-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:38:24.907: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-5fbcb4e0-23e6-11ea-bbd3-0242ac110005 container test-container: 
STEP: delete the pod
Dec 21 11:38:25.289: INFO: Waiting for pod pod-5fbcb4e0-23e6-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:38:25.309: INFO: Pod pod-5fbcb4e0-23e6-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:38:25.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-w26f4" for this suite.
Dec 21 11:38:32.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:38:33.199: INFO: namespace: e2e-tests-emptydir-w26f4, resource: bindings, ignored listing per whitelist
Dec 21 11:38:33.228: INFO: namespace e2e-tests-emptydir-w26f4 deletion completed in 7.912559265s

• [SLOW TEST:19.134 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:38:33.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 21 11:38:33.537: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Dec 21 11:38:33.584: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-h8d98/daemonsets","resourceVersion":"15561162"},"items":null}

Dec 21 11:38:33.590: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-h8d98/pods","resourceVersion":"15561162"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:38:33.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-h8d98" for this suite.
Dec 21 11:38:39.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:38:39.754: INFO: namespace: e2e-tests-daemonsets-h8d98, resource: bindings, ignored listing per whitelist
Dec 21 11:38:39.861: INFO: namespace e2e-tests-daemonsets-h8d98 deletion completed in 6.221244307s

S [SKIPPING] [6.632 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Dec 21 11:38:33.537: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
S
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:38:39.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-4lgtg
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-4lgtg
STEP: Deleting pre-stop pod
Dec 21 11:39:03.444: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:39:03.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-4lgtg" for this suite.
Dec 21 11:39:43.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:39:43.712: INFO: namespace: e2e-tests-prestop-4lgtg, resource: bindings, ignored listing per whitelist
Dec 21 11:39:44.107: INFO: namespace e2e-tests-prestop-4lgtg deletion completed in 40.547083614s

• [SLOW TEST:64.246 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:39:44.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 21 11:39:44.444: INFO: Waiting up to 5m0s for pod "pod-95641467-23e6-11ea-bbd3-0242ac110005" in namespace "e2e-tests-emptydir-rwwd4" to be "success or failure"
Dec 21 11:39:44.455: INFO: Pod "pod-95641467-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.542771ms
Dec 21 11:39:46.694: INFO: Pod "pod-95641467-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249613468s
Dec 21 11:39:48.712: INFO: Pod "pod-95641467-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.268044612s
Dec 21 11:39:50.892: INFO: Pod "pod-95641467-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.447492366s
Dec 21 11:39:52.933: INFO: Pod "pod-95641467-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.48880793s
Dec 21 11:39:54.953: INFO: Pod "pod-95641467-23e6-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.509046376s
STEP: Saw pod success
Dec 21 11:39:54.954: INFO: Pod "pod-95641467-23e6-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:39:54.959: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-95641467-23e6-11ea-bbd3-0242ac110005 container test-container: 
STEP: delete the pod
Dec 21 11:39:56.029: INFO: Waiting for pod pod-95641467-23e6-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:39:56.376: INFO: Pod pod-95641467-23e6-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:39:56.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rwwd4" for this suite.
Dec 21 11:40:02.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:40:02.493: INFO: namespace: e2e-tests-emptydir-rwwd4, resource: bindings, ignored listing per whitelist
Dec 21 11:40:02.710: INFO: namespace e2e-tests-emptydir-rwwd4 deletion completed in 6.319082197s

• [SLOW TEST:18.601 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
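For context, the EmptyDir test above creates a short-lived pod that mounts an emptyDir volume on the node's default medium and checks the mount's mode. A minimal sketch of that kind of manifest (image, mount path, and command are illustrative assumptions, not taken from the test source):

```python
# Sketch of a pod with one emptyDir volume on the default (disk) medium.
# The container prints the mount point's permission bits and exits, so the
# pod terminates with Phase="Succeeded" as seen in the log above.

def empty_dir_pod(name, medium=""):
    """Build a minimal pod manifest with a single emptyDir volume.

    medium="" selects the node's default medium; medium="Memory" would
    request a tmpfs-backed volume instead.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "test-container",
                "image": "busybox",  # assumed image
                "command": ["sh", "-c", "stat -c '%a' /test-volume"],
                "volumeMounts": [{"name": "test-volume",
                                  "mountPath": "/test-volume"}],
            }],
            "volumes": [{"name": "test-volume",
                         "emptyDir": {"medium": medium}}],
        },
    }

pod = empty_dir_pod("pod-emptydir-default")
```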
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:40:02.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 21 11:40:02.918: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a060d650-23e6-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-brp29" to be "success or failure"
Dec 21 11:40:02.938: INFO: Pod "downwardapi-volume-a060d650-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.778627ms
Dec 21 11:40:04.950: INFO: Pod "downwardapi-volume-a060d650-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031594627s
Dec 21 11:40:06.963: INFO: Pod "downwardapi-volume-a060d650-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043954799s
Dec 21 11:40:08.990: INFO: Pod "downwardapi-volume-a060d650-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071120885s
Dec 21 11:40:11.424: INFO: Pod "downwardapi-volume-a060d650-23e6-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.505038269s
Dec 21 11:40:13.558: INFO: Pod "downwardapi-volume-a060d650-23e6-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.639357462s
STEP: Saw pod success
Dec 21 11:40:13.558: INFO: Pod "downwardapi-volume-a060d650-23e6-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:40:13.572: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a060d650-23e6-11ea-bbd3-0242ac110005 container client-container: 
STEP: delete the pod
Dec 21 11:40:13.720: INFO: Waiting for pod downwardapi-volume-a060d650-23e6-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:40:13.732: INFO: Pod downwardapi-volume-a060d650-23e6-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:40:13.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-brp29" for this suite.
Dec 21 11:40:21.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:40:21.980: INFO: namespace: e2e-tests-projected-brp29, resource: bindings, ignored listing per whitelist
Dec 21 11:40:22.076: INFO: namespace e2e-tests-projected-brp29 deletion completed in 8.337881588s

• [SLOW TEST:19.366 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
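The projected downwardAPI test above exposes the container's own CPU request as a file in a projected volume. A hedged sketch of that shape (the image, paths, and the 250m request are assumptions; only the container name `client-container` appears in the log):

```python
# Sketch of a projected volume whose downwardAPI source writes the
# container's CPU request to a file the container can read back.

def projected_downward_api_pod(name):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "client-container",
                "image": "busybox",  # assumed image
                "command": ["sh", "-c", "cat /etc/podinfo/cpu_request"],
                "resources": {"requests": {"cpu": "250m"}},  # assumed value
                "volumeMounts": [{"name": "podinfo",
                                  "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "projected": {"sources": [{"downwardAPI": {"items": [{
                    "path": "cpu_request",
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "requests.cpu",
                        # With divisor "1m" a 250m request surfaces as "250".
                        "divisor": "1m",
                    },
                }]}}]},
            }],
        },
    }

pod = projected_downward_api_pod("downwardapi-volume-demo")
```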
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:40:22.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-z7bz6
Dec 21 11:40:30.322: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-z7bz6
STEP: checking the pod's current state and verifying that restartCount is present
Dec 21 11:40:30.326: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:44:32.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-z7bz6" for this suite.
Dec 21 11:44:40.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:44:41.117: INFO: namespace: e2e-tests-container-probe-z7bz6, resource: bindings, ignored listing per whitelist
Dec 21 11:44:41.153: INFO: namespace e2e-tests-container-probe-z7bz6 deletion completed in 8.451691596s

• [SLOW TEST:259.077 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
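The probe test above creates a `liveness-exec` pod whose exec probe keeps succeeding, then watches restartCount for about four minutes (11:40:30 to 11:44:32 in the log) to confirm it stays 0. A hedged sketch of such a pod (image and timing values are assumptions):

```python
# Sketch of a pod whose container writes /tmp/health once and sleeps, so the
# exec liveness probe `cat /tmp/health` always succeeds and the kubelet
# never restarts the container.

def liveness_exec_pod(name):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": "liveness",
                "image": "busybox",  # assumed image
                "args": ["/bin/sh", "-c", "touch /tmp/health; sleep 600"],
                "livenessProbe": {
                    "exec": {"command": ["cat", "/tmp/health"]},
                    "initialDelaySeconds": 15,  # assumed timings
                    "periodSeconds": 5,
                },
            }],
        },
    }

pod = liveness_exec_pod("liveness-exec")
```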
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:44:41.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:44:51.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-4lx5p" for this suite.
Dec 21 11:45:33.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:45:33.649: INFO: namespace: e2e-tests-kubelet-test-4lx5p, resource: bindings, ignored listing per whitelist
Dec 21 11:45:33.789: INFO: namespace e2e-tests-kubelet-test-4lx5p deletion completed in 42.225026214s

• [SLOW TEST:52.636 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
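The Kubelet test above schedules a busybox container with a read-only root filesystem and verifies writes fail. A hedged sketch of the securityContext involved (image and command are illustrative assumptions):

```python
# Sketch of a pod whose container runs with readOnlyRootFilesystem: any
# write to the root filesystem is expected to fail with a read-only error.

def read_only_root_pod(name):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "busybox",
                "image": "busybox",  # assumed image
                # This write is expected to fail inside the container.
                "command": ["sh", "-c", "echo test > /file"],
                "securityContext": {"readOnlyRootFilesystem": True},
            }],
        },
    }

pod = read_only_root_pod("busybox-readonly")
```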
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:45:33.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-65e5c388-23e7-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 21 11:45:34.421: INFO: Waiting up to 5m0s for pod "pod-configmaps-65eaf9ca-23e7-11ea-bbd3-0242ac110005" in namespace "e2e-tests-configmap-h5nwn" to be "success or failure"
Dec 21 11:45:34.439: INFO: Pod "pod-configmaps-65eaf9ca-23e7-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.62478ms
Dec 21 11:45:36.448: INFO: Pod "pod-configmaps-65eaf9ca-23e7-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026802831s
Dec 21 11:45:38.478: INFO: Pod "pod-configmaps-65eaf9ca-23e7-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056482175s
Dec 21 11:45:40.513: INFO: Pod "pod-configmaps-65eaf9ca-23e7-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091830903s
Dec 21 11:45:42.560: INFO: Pod "pod-configmaps-65eaf9ca-23e7-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.13891664s
Dec 21 11:45:44.629: INFO: Pod "pod-configmaps-65eaf9ca-23e7-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.2071602s
STEP: Saw pod success
Dec 21 11:45:44.629: INFO: Pod "pod-configmaps-65eaf9ca-23e7-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:45:44.660: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-65eaf9ca-23e7-11ea-bbd3-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 21 11:45:45.724: INFO: Waiting for pod pod-configmaps-65eaf9ca-23e7-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:45:46.032: INFO: Pod pod-configmaps-65eaf9ca-23e7-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:45:46.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-h5nwn" for this suite.
Dec 21 11:45:52.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:45:52.243: INFO: namespace: e2e-tests-configmap-h5nwn, resource: bindings, ignored listing per whitelist
Dec 21 11:45:52.393: INFO: namespace e2e-tests-configmap-h5nwn deletion completed in 6.339487736s

• [SLOW TEST:18.604 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
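The ConfigMap test above mounts a ConfigMap volume with `defaultMode` set and checks the projected file's permissions. A hedged sketch of that pair of objects (names, data, and the 0400 mode are assumptions; note that in serialized JSON manifests `defaultMode` is a decimal integer, so 0o400 appears as 256):

```python
# Sketch of a ConfigMap plus a pod that consumes it as a volume with an
# explicit defaultMode, then prints the file's permission bits.

config_map = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "configmap-test-volume"},
    "data": {"data-1": "value-1"},
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-configmaps-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "configmap-volume-test",
            "image": "busybox",  # assumed image
            "command": ["sh", "-c",
                        "stat -c '%a' /etc/configmap-volume/data-1"],
            "volumeMounts": [{"name": "configmap-volume",
                              "mountPath": "/etc/configmap-volume"}],
        }],
        "volumes": [{
            "name": "configmap-volume",
            "configMap": {"name": "configmap-test-volume",
                          "defaultMode": 0o400},
        }],
    },
}
```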
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:45:52.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-70f2d759-23e7-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 21 11:45:52.778: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-70f3ade2-23e7-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-26rbl" to be "success or failure"
Dec 21 11:45:52.785: INFO: Pod "pod-projected-configmaps-70f3ade2-23e7-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.289515ms
Dec 21 11:45:55.229: INFO: Pod "pod-projected-configmaps-70f3ade2-23e7-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.451245644s
Dec 21 11:45:57.260: INFO: Pod "pod-projected-configmaps-70f3ade2-23e7-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.481578994s
Dec 21 11:45:59.622: INFO: Pod "pod-projected-configmaps-70f3ade2-23e7-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.843728968s
Dec 21 11:46:01.629: INFO: Pod "pod-projected-configmaps-70f3ade2-23e7-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.850868015s
Dec 21 11:46:03.643: INFO: Pod "pod-projected-configmaps-70f3ade2-23e7-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.864403411s
STEP: Saw pod success
Dec 21 11:46:03.643: INFO: Pod "pod-projected-configmaps-70f3ade2-23e7-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:46:03.649: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-70f3ade2-23e7-11ea-bbd3-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 21 11:46:04.568: INFO: Waiting for pod pod-projected-configmaps-70f3ade2-23e7-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:46:04.651: INFO: Pod pod-projected-configmaps-70f3ade2-23e7-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:46:04.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-26rbl" for this suite.
Dec 21 11:46:10.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:46:10.849: INFO: namespace: e2e-tests-projected-26rbl, resource: bindings, ignored listing per whitelist
Dec 21 11:46:10.981: INFO: namespace e2e-tests-projected-26rbl deletion completed in 6.311732053s

• [SLOW TEST:18.587 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
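"Consumable in multiple volumes in the same pod" means the same projected ConfigMap is mounted at two paths in one pod and readable from both. A hedged sketch (names and paths are illustrative assumptions):

```python
# Sketch of one projected ConfigMap mounted twice in the same pod.

def projected_cm_volume(volume_name, cm_name):
    return {
        "name": volume_name,
        "projected": {"sources": [{"configMap": {"name": cm_name}}]},
    }

def two_volume_pod(cm_name):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-projected-configmaps-demo"},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "projected-configmap-volume-test",
                "image": "busybox",  # assumed image
                "command": ["sh", "-c",
                            "cat /etc/cm-volume-1/data-1 /etc/cm-volume-2/data-1"],
                "volumeMounts": [
                    {"name": "cm-volume-1", "mountPath": "/etc/cm-volume-1"},
                    {"name": "cm-volume-2", "mountPath": "/etc/cm-volume-2"},
                ],
            }],
            "volumes": [
                projected_cm_volume("cm-volume-1", cm_name),
                projected_cm_volume("cm-volume-2", cm_name),
            ],
        },
    }

pod = two_volume_pod("projected-configmap-test-volume")
```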
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:46:10.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 21 11:46:11.364: INFO: Waiting up to 5m0s for pod "pod-7c061404-23e7-11ea-bbd3-0242ac110005" in namespace "e2e-tests-emptydir-v4snz" to be "success or failure"
Dec 21 11:46:11.387: INFO: Pod "pod-7c061404-23e7-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.091402ms
Dec 21 11:46:13.418: INFO: Pod "pod-7c061404-23e7-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054398706s
Dec 21 11:46:15.441: INFO: Pod "pod-7c061404-23e7-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077204104s
Dec 21 11:46:17.803: INFO: Pod "pod-7c061404-23e7-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438956934s
Dec 21 11:46:19.828: INFO: Pod "pod-7c061404-23e7-11ea-bbd3-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.464638427s
Dec 21 11:46:21.855: INFO: Pod "pod-7c061404-23e7-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.490878249s
STEP: Saw pod success
Dec 21 11:46:21.855: INFO: Pod "pod-7c061404-23e7-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:46:21.868: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7c061404-23e7-11ea-bbd3-0242ac110005 container test-container: 
STEP: delete the pod
Dec 21 11:46:22.312: INFO: Waiting for pod pod-7c061404-23e7-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:46:22.570: INFO: Pod pod-7c061404-23e7-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:46:22.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-v4snz" for this suite.
Dec 21 11:46:28.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:46:28.731: INFO: namespace: e2e-tests-emptydir-v4snz, resource: bindings, ignored listing per whitelist
Dec 21 11:46:28.810: INFO: namespace e2e-tests-emptydir-v4snz deletion completed in 6.224299335s

• [SLOW TEST:17.829 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:46:28.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Dec 21 11:46:29.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 21 11:46:29.208: INFO: stderr: ""
Dec 21 11:46:29.208: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:46:29.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6g6lx" for this suite.
Dec 21 11:46:35.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:46:35.384: INFO: namespace: e2e-tests-kubectl-6g6lx, resource: bindings, ignored listing per whitelist
Dec 21 11:46:35.395: INFO: namespace e2e-tests-kubectl-6g6lx deletion completed in 6.174454286s

• [SLOW TEST:6.584 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
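The api-versions check above boils down to splitting the captured `kubectl api-versions` stdout into lines and looking for the core `v1` group. The same check, run against an abridged copy of the stdout logged above (not a live kubectl call):

```python
# Abridged copy of the stdout from the log above; the real output lists
# every served group/version, ending with the core group "v1".
api_versions_stdout = (
    "admissionregistration.k8s.io/v1beta1\n"
    "apps/v1\n"
    "batch/v1\n"
    "networking.k8s.io/v1\n"
    "storage.k8s.io/v1\n"
    "v1\n"
)

available = api_versions_stdout.splitlines()
core_v1_present = "v1" in available  # the condition the test validates
```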
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:46:35.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:46:35.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-njkwr" for this suite.
Dec 21 11:46:59.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:46:59.938: INFO: namespace: e2e-tests-pods-njkwr, resource: bindings, ignored listing per whitelist
Dec 21 11:47:00.024: INFO: namespace e2e-tests-pods-njkwr deletion completed in 24.435651566s

• [SLOW TEST:24.629 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
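The "Pods Set QOS Class" test verifies the API server stamps a QoS class into the pod's status. A minimal sketch of the documented classification rules (Guaranteed, Burstable, BestEffort), written from the general Kubernetes rules rather than the e2e source:

```python
# Sketch of pod QoS classification from container cpu/memory requests/limits.
# Guaranteed: every container has cpu and memory limits, with requests equal
# to limits (an unset request defaults to the limit). BestEffort: no
# container sets any request or limit. Everything else: Burstable.

def qos_class(containers):
    has_requests, has_limits, guaranteed = False, False, True
    for c in containers:
        res = c.get("resources", {})
        req, lim = res.get("requests", {}), res.get("limits", {})
        for resource in ("cpu", "memory"):
            if resource in req:
                has_requests = True
            if resource in lim:
                has_limits = True
            # Guaranteed needs a limit for both resources, and any explicit
            # request must match it (missing requests default to the limit).
            if resource not in lim or req.get(resource, lim[resource]) != lim[resource]:
                guaranteed = False
    if guaranteed and has_limits:
        return "Guaranteed"
    if has_requests or has_limits:
        return "Burstable"
    return "BestEffort"
```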
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:47:00.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1221 11:47:42.137615       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 21 11:47:42.138: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:47:42.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-dzgwj" for this suite.
Dec 21 11:48:08.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:48:08.301: INFO: namespace: e2e-tests-gc-dzgwj, resource: bindings, ignored listing per whitelist
Dec 21 11:48:08.450: INFO: namespace e2e-tests-gc-dzgwj deletion completed in 26.220903915s

• [SLOW TEST:68.426 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
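The garbage-collector test above deletes the ReplicationController with delete options that orphan its dependents, then waits 30 seconds to confirm the pods survive. A hedged sketch of that delete call (the RC name and endpoint path are illustrative assumptions; the namespace comes from the log):

```python
# Sketch of the DeleteOptions body for an orphaning delete: the RC is
# removed but the garbage collector does not cascade to its pods.

delete_options = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    # "Orphan" detaches dependents instead of deleting them; the other
    # propagation policies are "Background" and "Foreground".
    "propagationPolicy": "Orphan",
}

# Roughly the request the test issues (not executed here; RC name assumed):
request = {
    "method": "DELETE",
    "path": "/api/v1/namespaces/e2e-tests-gc-dzgwj/replicationcontrollers/simpletest-rc",
    "body": delete_options,
}
```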
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:48:08.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Dec 21 11:48:09.171: INFO: Waiting up to 5m0s for pod "var-expansion-c23d65b6-23e7-11ea-bbd3-0242ac110005" in namespace "e2e-tests-var-expansion-g6pwg" to be "success or failure"
Dec 21 11:48:09.313: INFO: Pod "var-expansion-c23d65b6-23e7-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 141.097167ms
Dec 21 11:48:11.327: INFO: Pod "var-expansion-c23d65b6-23e7-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155287761s
Dec 21 11:48:13.349: INFO: Pod "var-expansion-c23d65b6-23e7-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177444587s
Dec 21 11:48:15.648: INFO: Pod "var-expansion-c23d65b6-23e7-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.476905053s
Dec 21 11:48:17.662: INFO: Pod "var-expansion-c23d65b6-23e7-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.490028283s
Dec 21 11:48:20.075: INFO: Pod "var-expansion-c23d65b6-23e7-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.903064346s
STEP: Saw pod success
Dec 21 11:48:20.075: INFO: Pod "var-expansion-c23d65b6-23e7-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 11:48:20.093: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-c23d65b6-23e7-11ea-bbd3-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 21 11:48:20.332: INFO: Waiting for pod var-expansion-c23d65b6-23e7-11ea-bbd3-0242ac110005 to disappear
Dec 21 11:48:20.346: INFO: Pod var-expansion-c23d65b6-23e7-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:48:20.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-g6pwg" for this suite.
Dec 21 11:48:26.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:48:26.706: INFO: namespace: e2e-tests-var-expansion-g6pwg, resource: bindings, ignored listing per whitelist
Dec 21 11:48:26.737: INFO: namespace e2e-tests-var-expansion-g6pwg deletion completed in 6.367551232s

• [SLOW TEST:18.286 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
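The Variable Expansion run above waits for the pod by polling its phase roughly every two seconds until it reaches a terminal phase ("success or failure") or the 5m0s timeout expires. A minimal sketch of that polling loop, with hypothetical names (`poll_until`, `pod_reached_terminal_phase`) standing in for the e2e framework's wait helpers:

```python
import time

def poll_until(condition, interval=2.0, timeout=300.0):
    """Call condition() every `interval` seconds until it returns True
    or `timeout` seconds elapse. Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while True:
        if condition():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Simulated pod phases: Pending for a few polls, then Succeeded,
# mirroring the Elapsed: 141ms ... 10.9s progression in the log.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])

def pod_reached_terminal_phase():
    phase = next(phases, "Succeeded")
    return phase in ("Succeeded", "Failed")  # the "success or failure" condition

print(poll_until(pod_reached_terminal_phase, interval=0.01, timeout=1.0))
```

Note the loop checks the condition before the deadline, so a pod that is already terminal satisfies the wait on the very first poll, matching the 0-seconds-elapsed lines earlier in the log.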
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:48:26.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 21 11:48:27.015: INFO: Creating deployment "test-recreate-deployment"
Dec 21 11:48:27.063: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Dec 21 11:48:27.074: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Dec 21 11:48:29.092: INFO: Waiting for deployment "test-recreate-deployment" to complete
Dec 21 11:48:29.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712525707, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712525707, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712525707, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712525707, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 11:48:31.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712525707, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712525707, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712525707, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712525707, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 11:48:33.709: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712525707, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712525707, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712525707, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712525707, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 11:48:35.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712525707, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712525707, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712525707, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712525707, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 11:48:37.109: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 21 11:48:37.134: INFO: Updating deployment test-recreate-deployment
Dec 21 11:48:37.134: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 21 11:48:39.817: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-b2jtx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-b2jtx/deployments/test-recreate-deployment,UID:cce547dc-23e7-11ea-a994-fa163e34d433,ResourceVersion:15562357,Generation:2,CreationTimestamp:2019-12-21 11:48:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-21 11:48:37 +0000 UTC 2019-12-21 11:48:37 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-21 11:48:39 +0000 UTC 2019-12-21 11:48:27 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 21 11:48:39.829: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-b2jtx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-b2jtx/replicasets/test-recreate-deployment-589c4bfd,UID:d31740bc-23e7-11ea-a994-fa163e34d433,ResourceVersion:15562354,Generation:1,CreationTimestamp:2019-12-21 11:48:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment cce547dc-23e7-11ea-a994-fa163e34d433 0xc0010496ef 0xc001049730}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 21 11:48:39.829: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 21 11:48:39.829: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-b2jtx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-b2jtx/replicasets/test-recreate-deployment-5bf7f65dc,UID:cceea991-23e7-11ea-a994-fa163e34d433,ResourceVersion:15562343,Generation:2,CreationTimestamp:2019-12-21 11:48:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment cce547dc-23e7-11ea-a994-fa163e34d433 0xc001049ab0 0xc001049ab1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 21 11:48:40.491: INFO: Pod "test-recreate-deployment-589c4bfd-ccqms" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-ccqms,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-b2jtx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b2jtx/pods/test-recreate-deployment-589c4bfd-ccqms,UID:d31bb139-23e7-11ea-a994-fa163e34d433,ResourceVersion:15562355,Generation:0,CreationTimestamp:2019-12-21 11:48:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd d31740bc-23e7-11ea-a994-fa163e34d433 0xc000f8308f 0xc000f830a0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jcjxd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jcjxd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jcjxd true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000f83120} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000f83160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-21 11:48:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:48:40.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-b2jtx" for this suite.
Dec 21 11:48:53.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:48:53.593: INFO: namespace: e2e-tests-deployment-b2jtx, resource: bindings, ignored listing per whitelist
Dec 21 11:48:53.724: INFO: namespace e2e-tests-deployment-b2jtx deletion completed in 13.178992945s

• [SLOW TEST:26.987 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
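The Recreate strategy exercised above differs from RollingUpdate: the controller scales the old ReplicaSet down to zero before scaling the new one up, which is why the log shows the old `5bf7f65dc` ReplicaSet at `Replicas:*0` while the new `589c4bfd` pod is still Pending. A toy sketch of that ordering (hypothetical `recreate_rollout`, lists standing in for ReplicaSets; not the controller's actual code):

```python
def recreate_rollout(old_rs, new_rs, replicas, events):
    """Simulate a Recreate deployment rollout: all old pods are deleted
    before any new pod is created, so the two sets never run together."""
    while old_rs:
        events.append(("delete", old_rs.pop()))
    for i in range(replicas):
        pod = f"new-pod-{i}"
        new_rs.append(pod)
        events.append(("create", pod))
    return events

events = recreate_rollout(["old-pod-0"], [], 1, [])
# Every delete event must precede every create event.
deletes = [i for i, (op, _) in enumerate(events) if op == "delete"]
creates = [i for i, (op, _) in enumerate(events) if op == "create"]
print(max(deletes) < min(creates))
```

The trade-off is visible in the status dumps: with Recreate there is a window where `AvailableReplicas:0`, so the Available condition goes False until the new pod passes readiness.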
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:48:53.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-2tq96
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-2tq96
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-2tq96
Dec 21 11:48:54.215: INFO: Found 0 stateful pods, waiting for 1
Dec 21 11:49:04.236: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 21 11:49:04.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 21 11:49:05.070: INFO: stderr: ""
Dec 21 11:49:05.070: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 21 11:49:05.070: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 21 11:49:05.094: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 21 11:49:15.115: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
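The `mv` trick above deliberately breaks the pod's readiness probe: with `index.html` moved out of the nginx web root, the probe's request fails, so the kubelet flips the pod to Ready=false without restarting it; moving the file back later restores readiness. A minimal sketch of that mechanism (hypothetical `probe_ready`; file existence stands in for the HTTP check):

```python
import os
import shutil
import tempfile

def probe_ready(webroot):
    """Readiness-probe stand-in: the pod counts as Ready only while
    index.html is present in the web root."""
    return os.path.exists(os.path.join(webroot, "index.html"))

webroot = tempfile.mkdtemp()
stash = tempfile.mkdtemp()
open(os.path.join(webroot, "index.html"), "w").close()

print(probe_ready(webroot))                               # healthy pod
shutil.move(os.path.join(webroot, "index.html"), stash)   # mv ... /tmp/
print(probe_ready(webroot))                               # probe fails -> Ready=false
shutil.move(os.path.join(stash, "index.html"), webroot)   # mv it back
print(probe_ready(webroot))                               # Ready=true again
```

This is why the test can hold pods unhealthy on demand: the burst-scaling assertion is exactly that scale-up and scale-down proceed even while pods are in this Ready=false state.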
Dec 21 11:49:15.115: INFO: Waiting for statefulset status.replicas updated to 0
Dec 21 11:49:15.207: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 21 11:49:15.207: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  }]
Dec 21 11:49:15.207: INFO: ss-1                              Pending         []
Dec 21 11:49:15.207: INFO: 
Dec 21 11:49:15.207: INFO: StatefulSet ss has not reached scale 3, at 2
Dec 21 11:49:16.981: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.947691632s
Dec 21 11:49:18.090: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.173437485s
Dec 21 11:49:19.108: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.065277504s
Dec 21 11:49:20.141: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.046302721s
Dec 21 11:49:21.165: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.013762987s
Dec 21 11:49:22.609: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.98982083s
Dec 21 11:49:23.632: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.545574817s
Dec 21 11:49:24.911: INFO: Verifying statefulset ss doesn't scale past 3 for another 522.409245ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-2tq96
Dec 21 11:49:25.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:49:26.722: INFO: stderr: ""
Dec 21 11:49:26.722: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 21 11:49:26.722: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 21 11:49:26.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:49:27.306: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 21 11:49:27.306: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 21 11:49:27.306: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 21 11:49:27.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:49:27.979: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
Dec 21 11:49:27.979: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 21 11:49:27.979: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 21 11:49:28.038: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 11:49:28.038: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=false
Dec 21 11:49:38.063: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 11:49:38.064: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 11:49:38.064: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 21 11:49:38.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 21 11:49:38.714: INFO: stderr: ""
Dec 21 11:49:38.714: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 21 11:49:38.714: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 21 11:49:38.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 21 11:49:39.352: INFO: stderr: ""
Dec 21 11:49:39.352: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 21 11:49:39.352: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 21 11:49:39.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 21 11:49:40.084: INFO: stderr: ""
Dec 21 11:49:40.084: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 21 11:49:40.084: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 21 11:49:40.084: INFO: Waiting for statefulset status.replicas updated to 0
Dec 21 11:49:40.094: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 21 11:49:50.127: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 21 11:49:50.127: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 21 11:49:50.127: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 21 11:49:50.166: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 21 11:49:50.167: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  }]
Dec 21 11:49:50.167: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  }]
Dec 21 11:49:50.167: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  }]
Dec 21 11:49:50.167: INFO: 
Dec 21 11:49:50.167: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 21 11:49:51.189: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 21 11:49:51.189: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  }]
Dec 21 11:49:51.189: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  }]
Dec 21 11:49:51.189: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  }]
Dec 21 11:49:51.189: INFO: 
Dec 21 11:49:51.189: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 21 11:49:52.952: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 21 11:49:52.952: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  }]
Dec 21 11:49:52.953: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  }]
Dec 21 11:49:52.953: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  }]
Dec 21 11:49:52.953: INFO: 
Dec 21 11:49:52.953: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 21 11:49:53.990: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 21 11:49:53.990: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  }]
Dec 21 11:49:53.991: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  }]
Dec 21 11:49:53.991: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  }]
Dec 21 11:49:53.991: INFO: 
Dec 21 11:49:53.991: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 21 11:49:55.005: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 21 11:49:55.005: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  }]
Dec 21 11:49:55.005: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  }]
Dec 21 11:49:55.005: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  }]
Dec 21 11:49:55.005: INFO: 
Dec 21 11:49:55.005: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 21 11:49:56.523: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 21 11:49:56.523: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  }]
Dec 21 11:49:56.524: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  }]
Dec 21 11:49:56.524: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  }]
Dec 21 11:49:56.524: INFO: 
Dec 21 11:49:56.524: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 21 11:49:57.993: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 21 11:49:57.993: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  }]
Dec 21 11:49:57.993: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  }]
Dec 21 11:49:57.993: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  }]
Dec 21 11:49:57.993: INFO: 
Dec 21 11:49:57.993: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 21 11:49:59.009: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 21 11:49:59.009: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  }]
Dec 21 11:49:59.009: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  }]
Dec 21 11:49:59.009: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  }]
Dec 21 11:49:59.009: INFO: 
Dec 21 11:49:59.009: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 21 11:50:00.025: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Dec 21 11:50:00.025: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:48:54 +0000 UTC  }]
Dec 21 11:50:00.025: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  }]
Dec 21 11:50:00.025: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 11:49:15 +0000 UTC  }]
Dec 21 11:50:00.026: INFO: 
Dec 21 11:50:00.026: INFO: StatefulSet ss has not reached scale 0, at 3
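The repeated "StatefulSet ss has not reached scale 0, at 3" lines above come from a poll-until-scale loop: the harness re-reads the replica count about once a second until it matches the target or a deadline passes. A minimal sketch of that pattern (the `get_replicas` callback is a hypothetical stand-in for the framework's actual StatefulSet status read, not its real code):

```python
import time

def wait_for_scale(get_replicas, target, timeout=120, interval=1.0):
    """Poll get_replicas() until it returns `target` or `timeout` seconds
    elapse, mirroring the 'has not reached scale' polling in the log.
    Returns True on success, False if the deadline expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_replicas() == target:
            return True
        time.sleep(interval)  # the log shows roughly 1s between polls
    return False
```

In the run above the target is 0 and the observed count stays at 3 until the pods leave the Running phase around 11:50:00.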
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of its pods are running in namespace e2e-tests-statefulset-2tq96
Dec 21 11:50:01.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:50:01.275: INFO: rc: 1
Dec 21 11:50:01.276: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001b68e40 exit status 1   true [0xc001b97b88 0xc001b97ba0 0xc001b97bc0] [0xc001b97b88 0xc001b97ba0 0xc001b97bc0] [0xc001b97b98 0xc001b97bb0] [0x935700 0x935700] 0xc001abae40 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

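Each "Waiting 10s to retry failed RunHostCmd" entry above is one iteration of a fixed-delay retry loop around a `kubectl exec` invocation: the command fails (rc: 1, here because the nginx container, and later the pod itself, is gone during scale-down), and the harness sleeps 10 seconds before trying again. A sketch of that behavior under stated assumptions (`run_cmd` is a hypothetical callable returning an exit code and output; this is not the framework's actual RunHostCmd implementation):

```python
import time

def run_host_cmd_with_retry(run_cmd, attempts=20, delay=10):
    """Invoke run_cmd() until it succeeds (rc == 0), waiting `delay`
    seconds between failures, like the 10s retry cadence in the log.
    Raises if every attempt fails."""
    for i in range(attempts):
        rc, out = run_cmd()
        if rc == 0:
            return out
        if i < attempts - 1:
            time.sleep(delay)  # log shows: "Waiting 10s to retry failed RunHostCmd"
    raise RuntimeError("command did not succeed after %d attempts" % attempts)
```

Note the retries here are expected to keep failing: once ss-1 is deleted the error shifts from "container not found" to "pods \"ss-1\" not found", and the loop simply runs out the clock.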
Dec 21 11:50:11.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:50:11.579: INFO: rc: 1
Dec 21 11:50:11.579: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc000bdcd80 exit status 1   true [0xc0006abfa8 0xc0006abfc0 0xc0006abfd8] [0xc0006abfa8 0xc0006abfc0 0xc0006abfd8] [0xc0006abfb8 0xc0006abfd0] [0x935700 0x935700] 0xc00228b560 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Dec 21 11:50:21.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:50:21.850: INFO: rc: 1
Dec 21 11:50:21.851: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00237e120 exit status 1   true [0xc0000e8238 0xc0000e8b60 0xc0000e8c78] [0xc0000e8238 0xc0000e8b60 0xc0000e8c78] [0xc0000e8b10 0xc0000e8c58] [0x935700 0x935700] 0xc001ff9080 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:50:31.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:50:32.030: INFO: rc: 1
Dec 21 11:50:32.030: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00236c120 exit status 1   true [0xc000184000 0xc00000e278 0xc00000e370] [0xc000184000 0xc00000e278 0xc00000e370] [0xc00000e188 0xc00000e318] [0x935700 0x935700] 0xc00215ac00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:50:42.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:50:42.151: INFO: rc: 1
Dec 21 11:50:42.152: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00236c270 exit status 1   true [0xc00000e3b0 0xc00000e618 0xc00000e6f8] [0xc00000e3b0 0xc00000e618 0xc00000e6f8] [0xc00000e488 0xc00000e6c8] [0x935700 0x935700] 0xc00215afc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:50:52.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:50:52.311: INFO: rc: 1
Dec 21 11:50:52.311: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00236c390 exit status 1   true [0xc00000e728 0xc00000e840 0xc00000e870] [0xc00000e728 0xc00000e840 0xc00000e870] [0xc00000e810 0xc00000e860] [0x935700 0x935700] 0xc00215b6e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:51:02.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:51:02.522: INFO: rc: 1
Dec 21 11:51:02.522: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00236c4b0 exit status 1   true [0xc00000e878 0xc00000e8c0 0xc00000e970] [0xc00000e878 0xc00000e8c0 0xc00000e970] [0xc00000e898 0xc00000e930] [0x935700 0x935700] 0xc00215bb60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:51:12.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:51:12.705: INFO: rc: 1
Dec 21 11:51:12.705: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00236c600 exit status 1   true [0xc00000e9a0 0xc00000ea28 0xc00000ea58] [0xc00000e9a0 0xc00000ea28 0xc00000ea58] [0xc00000ea10 0xc00000ea50] [0x935700 0x935700] 0xc001d20120 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:51:22.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:51:22.906: INFO: rc: 1
Dec 21 11:51:22.907: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00236c720 exit status 1   true [0xc00000eb58 0xc00000eb90 0xc00000ebe0] [0xc00000eb58 0xc00000eb90 0xc00000ebe0] [0xc00000eb78 0xc00000ebb8] [0x935700 0x935700] 0xc001d20420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:51:32.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:51:33.061: INFO: rc: 1
Dec 21 11:51:33.061: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001b5c240 exit status 1   true [0xc001b62000 0xc001b62018 0xc001b62030] [0xc001b62000 0xc001b62018 0xc001b62030] [0xc001b62010 0xc001b62028] [0x935700 0x935700] 0xc000cf4e40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:51:43.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:51:43.212: INFO: rc: 1
Dec 21 11:51:43.213: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc000d502d0 exit status 1   true [0xc0006aa058 0xc0006aa0e8 0xc0006aa1d8] [0xc0006aa058 0xc0006aa0e8 0xc0006aa1d8] [0xc0006aa0c8 0xc0006aa1b0] [0x935700 0x935700] 0xc0014e5e60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:51:53.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:51:53.402: INFO: rc: 1
Dec 21 11:51:53.402: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00237e390 exit status 1   true [0xc0000e8d10 0xc0000e8d60 0xc0000e8e68] [0xc0000e8d10 0xc0000e8d60 0xc0000e8e68] [0xc0000e8d58 0xc0000e8e38] [0x935700 0x935700] 0xc001ff9380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:52:03.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:52:03.528: INFO: rc: 1
Dec 21 11:52:03.528: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001b5c3c0 exit status 1   true [0xc001b62038 0xc001b62050 0xc001b62068] [0xc001b62038 0xc001b62050 0xc001b62068] [0xc001b62048 0xc001b62060] [0x935700 0x935700] 0xc000cf5140 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:52:13.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:52:13.737: INFO: rc: 1
Dec 21 11:52:13.737: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc000d50420 exit status 1   true [0xc0006aa220 0xc0006aa288 0xc0006aa330] [0xc0006aa220 0xc0006aa288 0xc0006aa330] [0xc0006aa260 0xc0006aa2d8] [0x935700 0x935700] 0xc0017e8060 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:52:23.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:52:23.953: INFO: rc: 1
Dec 21 11:52:23.954: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00237e150 exit status 1   true [0xc00000e120 0xc00000e2b8 0xc00000e3b0] [0xc00000e120 0xc00000e2b8 0xc00000e3b0] [0xc00000e278 0xc00000e370] [0x935700 0x935700] 0xc0014e5d40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:52:33.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:52:34.342: INFO: rc: 1
Dec 21 11:52:34.342: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00236c150 exit status 1   true [0xc0006aa058 0xc0006aa0e8 0xc0006aa1d8] [0xc0006aa058 0xc0006aa0e8 0xc0006aa1d8] [0xc0006aa0c8 0xc0006aa1b0] [0x935700 0x935700] 0xc00215ac00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:52:44.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:52:44.537: INFO: rc: 1
Dec 21 11:52:44.538: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00236c2a0 exit status 1   true [0xc0006aa220 0xc0006aa288 0xc0006aa330] [0xc0006aa220 0xc0006aa288 0xc0006aa330] [0xc0006aa260 0xc0006aa2d8] [0x935700 0x935700] 0xc00215afc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:52:54.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:52:54.739: INFO: rc: 1
Dec 21 11:52:54.740: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001b5c270 exit status 1   true [0xc0000e8198 0xc0000e8b10 0xc0000e8c58] [0xc0000e8198 0xc0000e8b10 0xc0000e8c58] [0xc0000e82f8 0xc0000e8b98] [0x935700 0x935700] 0xc001ff9080 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:53:04.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:53:04.866: INFO: rc: 1
Dec 21 11:53:04.866: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00237e2a0 exit status 1   true [0xc00000e3c8 0xc00000e680 0xc00000e728] [0xc00000e3c8 0xc00000e680 0xc00000e728] [0xc00000e618 0xc00000e6f8] [0x935700 0x935700] 0xc001d20060 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:53:14.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:53:15.001: INFO: rc: 1
Dec 21 11:53:15.001: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00236c3f0 exit status 1   true [0xc0006aa348 0xc0006aa3c0 0xc0006aa420] [0xc0006aa348 0xc0006aa3c0 0xc0006aa420] [0xc0006aa3a0 0xc0006aa410] [0x935700 0x935700] 0xc00215b6e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:53:25.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:53:25.161: INFO: rc: 1
Dec 21 11:53:25.161: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001b5c510 exit status 1   true [0xc0000e8c78 0xc0000e8d58 0xc0000e8e38] [0xc0000e8c78 0xc0000e8d58 0xc0000e8e38] [0xc0000e8d38 0xc0000e8df0] [0x935700 0x935700] 0xc001ff9380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:53:35.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:53:35.283: INFO: rc: 1
Dec 21 11:53:35.283: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001b5c630 exit status 1   true [0xc0000e8e68 0xc0000e8ec0 0xc0000e8f08] [0xc0000e8e68 0xc0000e8ec0 0xc0000e8f08] [0xc0000e8e90 0xc0000e8ef0] [0x935700 0x935700] 0xc001ff9680 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:53:45.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:53:45.516: INFO: rc: 1
Dec 21 11:53:45.516: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001b5c750 exit status 1   true [0xc0000e8f20 0xc0000e8f68 0xc0000e8fa0] [0xc0000e8f20 0xc0000e8f68 0xc0000e8fa0] [0xc0000e8f48 0xc0000e8f98] [0x935700 0x935700] 0xc001ff9f20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:53:55.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:53:55.684: INFO: rc: 1
Dec 21 11:53:55.684: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001b5c8a0 exit status 1   true [0xc0000e8fb8 0xc0000e8ff8 0xc0000e9048] [0xc0000e8fb8 0xc0000e8ff8 0xc0000e9048] [0xc0000e8fe0 0xc0000e9040] [0x935700 0x935700] 0xc0017e8240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Dec 21 11:55:06.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2tq96 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 21 11:55:06.990: INFO: rc: 1
Dec 21 11:55:06.990: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: 
Dec 21 11:55:06.990: INFO: Scaling statefulset ss to 0
Dec 21 11:55:07.005: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 21 11:55:07.009: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2tq96
Dec 21 11:55:07.012: INFO: Scaling statefulset ss to 0
Dec 21 11:55:07.021: INFO: Waiting for statefulset status.replicas updated to 0
Dec 21 11:55:07.024: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:55:07.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-2tq96" for this suite.
Dec 21 11:55:15.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:55:15.273: INFO: namespace: e2e-tests-statefulset-2tq96, resource: bindings, ignored listing per whitelist
Dec 21 11:55:15.290: INFO: namespace e2e-tests-statefulset-2tq96 deletion completed in 8.187933133s

• [SLOW TEST:381.566 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
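The long failure run above is the framework's RunHostCmd retry loop: re-run the `kubectl exec` at a fixed interval ("Waiting 10s to retry") until it succeeds or the retry budget is exhausted. A minimal shell sketch of that pattern — the `retry` helper and the `flaky` demo command are illustrative stand-ins, not the e2e framework's actual code:

```shell
#!/bin/sh
# Generic retry: re-run "$@" up to $1 times, sleeping $2 seconds between
# attempts, mirroring the "Waiting 10s to retry failed RunHostCmd" loop.
retry() {
  attempts=$1; interval=$2; shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep "$interval"
  done
  return 1
}

# Demo: a command that only succeeds on its third invocation.
counter=$(mktemp)
echo 0 > "$counter"
flaky() {
  n=$(($(cat "$counter") + 1))
  echo "$n" > "$counter"
  [ "$n" -ge 3 ]
}

retry 5 0 flaky && echo "succeeded after $(cat "$counter") attempts"
# → succeeded after 3 attempts
```

In the log the budget runs out instead: the burst-scaling step has already deleted pod "ss-1", so every attempt returns NotFound until the test gives up and scales the StatefulSet to 0.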
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:55:15.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-r6d9t
Dec 21 11:55:25.769: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-r6d9t
STEP: checking the pod's current state and verifying that restartCount is present
Dec 21 11:55:25.776: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:59:27.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-r6d9t" for this suite.
Dec 21 11:59:33.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:59:33.723: INFO: namespace: e2e-tests-container-probe-r6d9t, resource: bindings, ignored listing per whitelist
Dec 21 11:59:33.786: INFO: namespace e2e-tests-container-probe-r6d9t deletion completed in 6.388071412s

• [SLOW TEST:258.495 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
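The probe test above creates a pod named liveness-http whose container must keep answering its /healthz liveness probe so that restartCount stays at its initial value of 0 over the roughly four-minute observation window. A hedged sketch of what such a pod spec looks like — the image, port, and timings here are illustrative assumptions, not the exact values the e2e framework uses:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness        # assumed image; the framework's may differ
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15         # give the server time to come up
      periodSeconds: 5
```

As long as /healthz keeps returning a 2xx status, the kubelet never restarts the container, which is all the test has to assert.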
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:59:33.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 21 11:59:34.207: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 21 11:59:34.271: INFO: Waiting for terminating namespaces to be deleted...
Dec 21 11:59:34.281: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 21 11:59:34.418: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 21 11:59:34.418: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 21 11:59:34.418: INFO: 	Container coredns ready: true, restart count 0
Dec 21 11:59:34.418: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Dec 21 11:59:34.418: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 21 11:59:34.418: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 21 11:59:34.418: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 21 11:59:34.418: INFO: 	Container weave ready: true, restart count 0
Dec 21 11:59:34.418: INFO: 	Container weave-npc ready: true, restart count 0
Dec 21 11:59:34.418: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Dec 21 11:59:34.418: INFO: 	Container coredns ready: true, restart count 0
Dec 21 11:59:34.418: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 21 11:59:34.418: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Dec 21 11:59:34.507: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 21 11:59:34.507: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 21 11:59:34.507: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 21 11:59:34.507: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Dec 21 11:59:34.507: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Dec 21 11:59:34.507: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Dec 21 11:59:34.507: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Dec 21 11:59:34.507: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-5abf7e75-23e9-11ea-bbd3-0242ac110005.15e26117da1b931d], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-tnhvf/filler-pod-5abf7e75-23e9-11ea-bbd3-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-5abf7e75-23e9-11ea-bbd3-0242ac110005.15e26118e55c8b98], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-5abf7e75-23e9-11ea-bbd3-0242ac110005.15e261196e9b303e], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-5abf7e75-23e9-11ea-bbd3-0242ac110005.15e261199f307848], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e2611a39c3ab7a], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:59:45.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-tnhvf" for this suite.
Dec 21 11:59:54.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 11:59:54.113: INFO: namespace: e2e-tests-sched-pred-tnhvf, resource: bindings, ignored listing per whitelist
Dec 21 11:59:54.378: INFO: namespace e2e-tests-sched-pred-tnhvf deletion completed in 8.414619614s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:20.592 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
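The predicate check above is plain arithmetic: sum the CPU requests already on the node, fill the remainder with a filler pod, then show that one more pod cannot fit. A runnable sketch of that bookkeeping using the per-pod requests from the log — the 4000m allocatable figure is an assumption for illustration:

```shell
#!/bin/sh
# Sum the millicore requests logged for hunter-server-hu5at5svl7ps.
allocatable=4000                      # assumed node allocatable CPU (millicores)
requests="100m 100m 0m 250m 200m 0m 100m 20m"
used=0
for r in $requests; do
  used=$((used + ${r%m}))             # strip the "m" suffix and add
done
free=$((allocatable - used))
echo "used=${used}m free=${free}m"    # → used=770m free=3230m
# A filler pod requesting the free amount saturates the node; the next pod
# fails with "0/1 nodes are available: 1 Insufficient cpu."
```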
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 11:59:54.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 11:59:54.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-j5bwk" for this suite.
Dec 21 12:00:00.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:00:00.938: INFO: namespace: e2e-tests-services-j5bwk, resource: bindings, ignored listing per whitelist
Dec 21 12:00:01.102: INFO: namespace e2e-tests-services-j5bwk deletion completed in 6.306640736s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.723 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:00:01.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 21 12:00:01.312: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6aac6f97-23e9-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-2xrsw" to be "success or failure"
Dec 21 12:00:01.325: INFO: Pod "downwardapi-volume-6aac6f97-23e9-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.621772ms
Dec 21 12:00:03.385: INFO: Pod "downwardapi-volume-6aac6f97-23e9-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072665532s
Dec 21 12:00:05.395: INFO: Pod "downwardapi-volume-6aac6f97-23e9-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083633089s
Dec 21 12:00:07.708: INFO: Pod "downwardapi-volume-6aac6f97-23e9-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.3964611s
Dec 21 12:00:09.730: INFO: Pod "downwardapi-volume-6aac6f97-23e9-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.418356266s
Dec 21 12:00:11.754: INFO: Pod "downwardapi-volume-6aac6f97-23e9-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.442603435s
STEP: Saw pod success
Dec 21 12:00:11.755: INFO: Pod "downwardapi-volume-6aac6f97-23e9-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:00:11.766: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-6aac6f97-23e9-11ea-bbd3-0242ac110005 container client-container: 
STEP: delete the pod
Dec 21 12:00:12.424: INFO: Waiting for pod downwardapi-volume-6aac6f97-23e9-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:00:12.686: INFO: Pod downwardapi-volume-6aac6f97-23e9-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:00:12.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2xrsw" for this suite.
Dec 21 12:00:18.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:00:18.815: INFO: namespace: e2e-tests-projected-2xrsw, resource: bindings, ignored listing per whitelist
Dec 21 12:00:18.970: INFO: namespace e2e-tests-projected-2xrsw deletion completed in 6.274426931s

• [SLOW TEST:17.867 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
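The DefaultMode assertion turns on a decimal/octal split: the API stores defaultMode as decimal 420, which is octal 0644 on disk (the same value is visible as "defaultMode": 420 in the pod JSON later in this log). A local sketch of the equivalence, no cluster required:

```shell
#!/bin/sh
# defaultMode is stored in the API as decimal 420, i.e. octal 0644.
tmp=$(mktemp)
chmod 644 "$tmp"
# GNU stat uses -c '%a'; BSD/macOS stat uses -f '%Lp'.
mode=$(stat -c '%a' "$tmp" 2>/dev/null || stat -f '%Lp' "$tmp")
dec=$(printf '%d' 0644)               # printf reads the leading 0 as octal
echo "API defaultMode $dec corresponds to file mode $mode"
rm -f "$tmp"
```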
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:00:18.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:00:32.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-xk5z4" for this suite.
Dec 21 12:00:56.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:00:56.810: INFO: namespace: e2e-tests-replication-controller-xk5z4, resource: bindings, ignored listing per whitelist
Dec 21 12:00:57.092: INFO: namespace e2e-tests-replication-controller-xk5z4 deletion completed in 24.533579847s

• [SLOW TEST:38.122 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
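The adoption step above boils down to two checks a controller makes over orphan pods: the pod carries no controller ownerReference, and its labels satisfy the controller's selector. A toy sketch of that decision — the variables stand in for fields the controller reads from the API, and real selector matching is subset matching over label maps, not string equality; this is not the controller-manager's actual code:

```shell
#!/bin/sh
# Adoption rule: orphan (no controller ownerReference) + selector match.
pod_labels="name=pod-adoption"
pod_owner=""                     # empty => the pod is an orphan
rc_selector="name=pod-adoption"

if [ -z "$pod_owner" ] && [ "$pod_labels" = "$rc_selector" ]; then
  echo "adopted"                 # → adopted
else
  echo "left alone"
fi
```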
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:00:57.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 21 12:00:57.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-6rl9z'
Dec 21 12:00:59.889: INFO: stderr: ""
Dec 21 12:00:59.889: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 21 12:01:09.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-6rl9z -o json'
Dec 21 12:01:10.178: INFO: stderr: ""
Dec 21 12:01:10.179: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-21T12:00:59Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-6rl9z\",\n        \"resourceVersion\": \"15563568\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-6rl9z/pods/e2e-test-nginx-pod\",\n        \"uid\": \"8d99e192-23e9-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-kbjrg\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": 
\"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-kbjrg\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-kbjrg\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-21T12:01:00Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-21T12:01:09Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-21T12:01:09Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-21T12:00:59Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://cc86fdb0752bf202e05432162321dcb78445b23a362993dce6e93b15543594f1\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                
        \"startedAt\": \"2019-12-21T12:01:07Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-21T12:01:00Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 21 12:01:10.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-6rl9z'
Dec 21 12:01:10.626: INFO: stderr: ""
Dec 21 12:01:10.627: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Dec 21 12:01:10.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-6rl9z'
Dec 21 12:01:19.965: INFO: stderr: ""
Dec 21 12:01:19.966: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:01:19.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6rl9z" for this suite.
Dec 21 12:01:26.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:01:26.179: INFO: namespace: e2e-tests-kubectl-6rl9z, resource: bindings, ignored listing per whitelist
Dec 21 12:01:26.234: INFO: namespace e2e-tests-kubectl-6rl9z deletion completed in 6.255768413s

• [SLOW TEST:29.141 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
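The replace step above round-trips the object: `kubectl get ... -o json`, rewrite the image field locally, then pipe the result to `kubectl replace -f -` (the `Running '... replace -f -'` line). The cluster-side halves need an apiserver, so this sketch exercises only the local rewrite, on an inline stand-in for the full pod JSON:

```shell
#!/bin/sh
# Local stand-in for the image rewrite the test performs before piping
# the object back through `kubectl replace -f -`.
json='{"image": "docker.io/library/nginx:1.14-alpine"}'
updated=$(printf '%s' "$json" \
  | sed 's|nginx:1.14-alpine|busybox:1.29|')
echo "$updated"   # → {"image": "docker.io/library/busybox:1.29"}
```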
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:01:26.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xnjfw
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 21 12:01:26.479: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 21 12:02:02.960: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-xnjfw PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 12:02:02.960: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 12:02:03.486: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:02:03.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-xnjfw" for this suite.
Dec 21 12:02:29.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:02:29.799: INFO: namespace: e2e-tests-pod-network-test-xnjfw, resource: bindings, ignored listing per whitelist
Dec 21 12:02:29.812: INFO: namespace e2e-tests-pod-network-test-xnjfw deletion completed in 26.262858182s

• [SLOW TEST:63.578 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
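The intra-pod UDP check above execs into the hostexec pod and curls the test container's `/dial` endpoint, which (in the e2e network test images) replies with a small JSON body whose `responses` array lists the hostnames that answered. The validation the framework loops on reduces to parsing that array; a minimal sketch, with an illustrative payload (the helper name and sample hostname are not taken from this log):

```python
import json

def endpoints_from_dial(body: str) -> set:
    """Parse a /dial reply and return the set of responding hostnames.

    The e2e test retries until every expected pod has answered at least
    once; an empty "responses" list means this attempt found no endpoint
    (the log's "Waiting for endpoints: map[]").
    """
    reply = json.loads(body)
    return set(reply.get("responses", []))

# Illustrative payload shaped like a /dial reply.
sample = '{"responses": ["netserver-0"]}'
print(endpoints_from_dial(sample))  # {'netserver-0'}
```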
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:02:29.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 21 12:02:30.040: INFO: Waiting up to 5m0s for pod "pod-c35e0956-23e9-11ea-bbd3-0242ac110005" in namespace "e2e-tests-emptydir-xh9ms" to be "success or failure"
Dec 21 12:02:30.049: INFO: Pod "pod-c35e0956-23e9-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.016444ms
Dec 21 12:02:32.162: INFO: Pod "pod-c35e0956-23e9-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121643376s
Dec 21 12:02:34.220: INFO: Pod "pod-c35e0956-23e9-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180035227s
Dec 21 12:02:36.235: INFO: Pod "pod-c35e0956-23e9-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.194704518s
Dec 21 12:02:38.247: INFO: Pod "pod-c35e0956-23e9-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.207201212s
Dec 21 12:02:40.284: INFO: Pod "pod-c35e0956-23e9-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.243461299s
STEP: Saw pod success
Dec 21 12:02:40.284: INFO: Pod "pod-c35e0956-23e9-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:02:40.603: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c35e0956-23e9-11ea-bbd3-0242ac110005 container test-container: 
STEP: delete the pod
Dec 21 12:02:40.717: INFO: Waiting for pod pod-c35e0956-23e9-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:02:40.724: INFO: Pod pod-c35e0956-23e9-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:02:40.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xh9ms" for this suite.
Dec 21 12:02:46.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:02:46.861: INFO: namespace: e2e-tests-emptydir-xh9ms, resource: bindings, ignored listing per whitelist
Dec 21 12:02:46.954: INFO: namespace e2e-tests-emptydir-xh9ms deletion completed in 6.220915838s

• [SLOW TEST:17.142 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
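The EmptyDir test above mounts a tmpfs-backed volume at mode 0777 and has the test container print the file's permission bits for verification. The core of that check is just reading the mode bits back; a local sketch (the temp file stands in for the volume path):

```python
import os
import stat
import tempfile

# Create a file and give it the 0777 mode the test expects on the mount.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o777)

# S_IMODE strips the file-type bits, leaving only the permission bits,
# which is what the test compares (rendered as "-rwxrwxrwx").
mode = stat.S_IMODE(os.stat(path).st_mode)
assert mode == 0o777, oct(mode)
print(stat.filemode(os.stat(path).st_mode))  # -rwxrwxrwx
os.remove(path)
```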
S
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:02:46.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-cd9148f6-23e9-11ea-bbd3-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-cd914a6a-23e9-11ea-bbd3-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-cd9148f6-23e9-11ea-bbd3-0242ac110005
STEP: Updating configmap cm-test-opt-upd-cd914a6a-23e9-11ea-bbd3-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-cd914acc-23e9-11ea-bbd3-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:03:05.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-5bmgc" for this suite.
Dec 21 12:03:34.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:03:34.136: INFO: namespace: e2e-tests-configmap-5bmgc, resource: bindings, ignored listing per whitelist
Dec 21 12:03:34.197: INFO: namespace e2e-tests-configmap-5bmgc deletion completed in 28.235789915s

• [SLOW TEST:47.243 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:03:34.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 21 12:03:34.528: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e9cad0c7-23e9-11ea-bbd3-0242ac110005" in namespace "e2e-tests-downward-api-6bskb" to be "success or failure"
Dec 21 12:03:34.540: INFO: Pod "downwardapi-volume-e9cad0c7-23e9-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.964438ms
Dec 21 12:03:36.613: INFO: Pod "downwardapi-volume-e9cad0c7-23e9-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085042971s
Dec 21 12:03:38.641: INFO: Pod "downwardapi-volume-e9cad0c7-23e9-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113317689s
Dec 21 12:03:41.011: INFO: Pod "downwardapi-volume-e9cad0c7-23e9-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.483578663s
Dec 21 12:03:43.028: INFO: Pod "downwardapi-volume-e9cad0c7-23e9-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.500231024s
Dec 21 12:03:45.049: INFO: Pod "downwardapi-volume-e9cad0c7-23e9-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.521525641s
STEP: Saw pod success
Dec 21 12:03:45.050: INFO: Pod "downwardapi-volume-e9cad0c7-23e9-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:03:45.111: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e9cad0c7-23e9-11ea-bbd3-0242ac110005 container client-container: 
STEP: delete the pod
Dec 21 12:03:45.394: INFO: Waiting for pod downwardapi-volume-e9cad0c7-23e9-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:03:45.453: INFO: Pod downwardapi-volume-e9cad0c7-23e9-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:03:45.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6bskb" for this suite.
Dec 21 12:03:51.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:03:51.730: INFO: namespace: e2e-tests-downward-api-6bskb, resource: bindings, ignored listing per whitelist
Dec 21 12:03:51.782: INFO: namespace e2e-tests-downward-api-6bskb deletion completed in 6.317331471s

• [SLOW TEST:17.584 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:03:51.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-mjrwb
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-mjrwb to expose endpoints map[]
Dec 21 12:03:52.198: INFO: Get endpoints failed (23.715653ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 21 12:03:53.215: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-mjrwb exposes endpoints map[] (1.040090272s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-mjrwb
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-mjrwb to expose endpoints map[pod1:[80]]
Dec 21 12:03:57.575: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.336934751s elapsed, will retry)
Dec 21 12:04:01.803: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-mjrwb exposes endpoints map[pod1:[80]] (8.564556113s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-mjrwb
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-mjrwb to expose endpoints map[pod1:[80] pod2:[80]]
Dec 21 12:04:06.464: INFO: Unexpected endpoints: found map[f4f4ed7f-23e9-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.59519975s elapsed, will retry)
Dec 21 12:04:11.078: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-mjrwb exposes endpoints map[pod1:[80] pod2:[80]] (9.209458123s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-mjrwb
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-mjrwb to expose endpoints map[pod2:[80]]
Dec 21 12:04:12.165: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-mjrwb exposes endpoints map[pod2:[80]] (1.07558139s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-mjrwb
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-mjrwb to expose endpoints map[]
Dec 21 12:04:13.476: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-mjrwb exposes endpoints map[] (1.294033139s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:04:15.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-mjrwb" for this suite.
Dec 21 12:04:39.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:04:39.779: INFO: namespace: e2e-tests-services-mjrwb, resource: bindings, ignored listing per whitelist
Dec 21 12:04:39.879: INFO: namespace e2e-tests-services-mjrwb deletion completed in 24.761911399s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:48.097 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
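The Services test above repeatedly compares the endpoints the apiserver reports against an expected map of pod name to ports (the log's `map[pod1:[80] pod2:[80]]`), retrying until they match or the 3m0s timeout expires. That comparison reduces to dict equality after normalizing port order; a sketch with illustrative names:

```python
def endpoints_match(observed: dict, expected: dict) -> bool:
    """True when the same pods expose the same ports, ignoring port order."""
    norm = lambda m: {pod: sorted(ports) for pod, ports in m.items()}
    return norm(observed) == norm(expected)

# Mirrors the transitions in the log:
# empty -> pod1 -> pod1+pod2 -> pod2 -> empty.
assert endpoints_match({}, {})
assert endpoints_match({"pod1": [80]}, {"pod1": [80]})
assert not endpoints_match({"pod1": [80]}, {"pod1": [80], "pod2": [80]})
```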
SSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:04:39.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-j5qpt
Dec 21 12:04:50.065: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-j5qpt
STEP: checking the pod's current state and verifying that restartCount is present
Dec 21 12:04:50.069: INFO: Initial restart count of pod liveness-exec is 0
Dec 21 12:05:49.091: INFO: Restart count of pod e2e-tests-container-probe-j5qpt/liveness-exec is now 1 (59.022668117s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:05:50.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-j5qpt" for this suite.
Dec 21 12:05:56.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:05:56.795: INFO: namespace: e2e-tests-container-probe-j5qpt, resource: bindings, ignored listing per whitelist
Dec 21 12:05:56.839: INFO: namespace e2e-tests-container-probe-j5qpt deletion completed in 6.311078584s

• [SLOW TEST:76.960 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
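The liveness test above runs a pod whose probe execs `cat /tmp/health`; the container creates the file, sleeps, then deletes it, so the probe starts failing and the kubelet restarts the container (restartCount goes 0 to 1 after ~59s here). The restart decision can be sketched as follows; the failure threshold of 3 is the Kubernetes default, assumed here rather than taken from this log:

```python
import os
import subprocess
import tempfile

def probe(path: str) -> bool:
    """Exec-style probe: success iff `cat <path>` exits 0,
    mirroring the test's `cat /tmp/health` probe command."""
    return subprocess.run(["cat", path], capture_output=True).returncode == 0

def needs_restart(results, failure_threshold=3):
    """Restart only after failure_threshold consecutive probe failures
    (failureThreshold=3 is the kubelet default)."""
    return len(results) >= failure_threshold and not any(results[-failure_threshold:])

path = os.path.join(tempfile.mkdtemp(), "health")
open(path, "w").close()
history = [probe(path)]                      # file exists -> probe passes
os.remove(path)
history += [probe(path) for _ in range(3)]   # file gone -> three failures
assert needs_restart(history)
```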
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:05:56.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 21 12:05:57.020: INFO: Waiting up to 5m0s for pod "pod-3ebc4e3c-23ea-11ea-bbd3-0242ac110005" in namespace "e2e-tests-emptydir-7ncl6" to be "success or failure"
Dec 21 12:05:57.124: INFO: Pod "pod-3ebc4e3c-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 104.550439ms
Dec 21 12:05:59.710: INFO: Pod "pod-3ebc4e3c-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.690044283s
Dec 21 12:06:02.772: INFO: Pod "pod-3ebc4e3c-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.752322137s
Dec 21 12:06:04.783: INFO: Pod "pod-3ebc4e3c-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.763559697s
Dec 21 12:06:06.799: INFO: Pod "pod-3ebc4e3c-23ea-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.779686706s
STEP: Saw pod success
Dec 21 12:06:06.800: INFO: Pod "pod-3ebc4e3c-23ea-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:06:06.803: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3ebc4e3c-23ea-11ea-bbd3-0242ac110005 container test-container: 
STEP: delete the pod
Dec 21 12:06:06.930: INFO: Waiting for pod pod-3ebc4e3c-23ea-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:06:07.009: INFO: Pod pod-3ebc4e3c-23ea-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:06:07.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7ncl6" for this suite.
Dec 21 12:06:13.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:06:13.266: INFO: namespace: e2e-tests-emptydir-7ncl6, resource: bindings, ignored listing per whitelist
Dec 21 12:06:13.289: INFO: namespace e2e-tests-emptydir-7ncl6 deletion completed in 6.268243812s

• [SLOW TEST:16.449 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:06:13.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 21 12:06:13.421: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 21 12:06:13.441: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 21 12:06:19.035: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 21 12:06:23.067: INFO: Creating deployment "test-rolling-update-deployment"
Dec 21 12:06:23.108: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 21 12:06:23.132: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 21 12:06:25.193: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Dec 21 12:06:25.219: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712526783, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712526783, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712526783, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712526783, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 12:06:27.235: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712526783, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712526783, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712526783, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712526783, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 12:06:29.422: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712526783, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712526783, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712526783, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712526783, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 12:06:31.238: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712526783, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712526783, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712526783, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712526783, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 12:06:33.230: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 21 12:06:33.243: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-s7p8l,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-s7p8l/deployments/test-rolling-update-deployment,UID:4e45f68f-23ea-11ea-a994-fa163e34d433,ResourceVersion:15564262,Generation:1,CreationTimestamp:2019-12-21 12:06:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-21 12:06:23 +0000 UTC 2019-12-21 12:06:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-21 12:06:32 +0000 UTC 2019-12-21 12:06:23 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
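The Deployment dumped above uses the RollingUpdate strategy with maxSurge and maxUnavailable both at 25%. Kubernetes resolves these percentages against spec.replicas, rounding surge up and unavailable down, which is why the 1-replica deployment briefly shows Replicas:2 with UnavailableReplicas:1 during the rollout. A sketch of that arithmetic (a simplified model of the Deployment controller, not code from this log):

```python
import math

def resolve_rolling_update(replicas: int, max_surge="25%", max_unavailable="25%"):
    """Resolve percentage maxSurge/maxUnavailable against the replica count:
    surge rounds up, unavailable rounds down. The real controller also guards
    against both resolving to zero (which would deadlock the rollout); that
    corner case is elided here for brevity."""
    def pct(v):
        return int(v.rstrip("%")) / 100
    surge = math.ceil(replicas * pct(max_surge))
    unavailable = math.floor(replicas * pct(max_unavailable))
    return surge, unavailable

print(resolve_rolling_update(1))  # (1, 0)
```

With replicas=1 this yields one surge pod and zero tolerated unavailable pods, matching the two-pod intermediate state logged during the rollout.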

Dec 21 12:06:33.247: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-s7p8l,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-s7p8l/replicasets/test-rolling-update-deployment-75db98fb4c,UID:4e6d92a0-23ea-11ea-a994-fa163e34d433,ResourceVersion:15564252,Generation:1,CreationTimestamp:2019-12-21 12:06:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 4e45f68f-23ea-11ea-a994-fa163e34d433 0xc001693637 0xc001693638}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 21 12:06:33.247: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 21 12:06:33.247: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-s7p8l,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-s7p8l/replicasets/test-rolling-update-controller,UID:48856a35-23ea-11ea-a994-fa163e34d433,ResourceVersion:15564261,Generation:2,CreationTimestamp:2019-12-21 12:06:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 4e45f68f-23ea-11ea-a994-fa163e34d433 0xc00169329f 0xc0016932c0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 21 12:06:33.253: INFO: Pod "test-rolling-update-deployment-75db98fb4c-wk446" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-wk446,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-s7p8l,SelfLink:/api/v1/namespaces/e2e-tests-deployment-s7p8l/pods/test-rolling-update-deployment-75db98fb4c-wk446,UID:4e71ffeb-23ea-11ea-a994-fa163e34d433,ResourceVersion:15564251,Generation:0,CreationTimestamp:2019-12-21 12:06:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 4e6d92a0-23ea-11ea-a994-fa163e34d433 0xc0020762a7 0xc0020762a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jl4j2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jl4j2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-jl4j2 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002076360} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002076380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 12:06:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 12:06:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 12:06:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 12:06:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-21 12:06:23 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-21 12:06:32 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://9e2ae844112261a5552bd1fd79d9a584602c64bf492395ee275c5f442f885d20}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:06:33.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-s7p8l" for this suite.
Dec 21 12:06:41.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:06:41.961: INFO: namespace: e2e-tests-deployment-s7p8l, resource: bindings, ignored listing per whitelist
Dec 21 12:06:42.093: INFO: namespace e2e-tests-deployment-s7p8l deletion completed in 8.833649563s

• [SLOW TEST:28.803 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
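
The RollingUpdateDeployment behavior verified above (old ReplicaSet scaled to 0, new pod available) can be reproduced with a minimal manifest sketch. Names are illustrative, not taken from the suite's generated specs; the maxSurge value is inferred from the `deployment.kubernetes.io/max-replicas: 2` annotation in the dumped ReplicaSet:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment   # illustrative name
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # consistent with max-replicas: 2 seen in the log above
      maxUnavailable: 0
  selector:
    matchLabels:
      name: sample-pod
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```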
------------------------------
SSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:06:42.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Dec 21 12:06:42.450: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-l6b67" to be "success or failure"
Dec 21 12:06:42.461: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.538182ms
Dec 21 12:06:44.481: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030598399s
Dec 21 12:06:46.507: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056324997s
Dec 21 12:06:48.589: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138824942s
Dec 21 12:06:52.021: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.570157573s
Dec 21 12:06:54.048: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.597350446s
Dec 21 12:06:56.059: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.60892649s
Dec 21 12:06:58.076: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.625885146s
STEP: Saw pod success
Dec 21 12:06:58.077: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 21 12:06:58.082: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Dec 21 12:06:58.494: INFO: Waiting for pod pod-host-path-test to disappear
Dec 21 12:06:58.527: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:06:58.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-l6b67" for this suite.
Dec 21 12:07:06.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:07:06.876: INFO: namespace: e2e-tests-hostpath-l6b67, resource: bindings, ignored listing per whitelist
Dec 21 12:07:07.293: INFO: namespace e2e-tests-hostpath-l6b67 deletion completed in 8.751913041s

• [SLOW TEST:25.200 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
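
The hostPath mode check above runs a pod that mounts a host directory and inspects its mode. A minimal sketch of that pod, assuming the suite's mounttest image and illustrative paths (the actual spec is generated in Go by the test framework):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/host-path-test          # illustrative host path
  containers:
  - name: test-container-1
    # mounttest prints filesystem metadata for the given paths, then exits
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    args: ["--fs_type=/test-volume", "--file_mode=/test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
```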
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:07:07.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-68bfe9ae-23ea-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 21 12:07:07.581: INFO: Waiting up to 5m0s for pod "pod-secrets-68c3f67b-23ea-11ea-bbd3-0242ac110005" in namespace "e2e-tests-secrets-7wz4s" to be "success or failure"
Dec 21 12:07:07.661: INFO: Pod "pod-secrets-68c3f67b-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 80.240754ms
Dec 21 12:07:09.867: INFO: Pod "pod-secrets-68c3f67b-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.285871565s
Dec 21 12:07:11.902: INFO: Pod "pod-secrets-68c3f67b-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320933329s
Dec 21 12:07:13.955: INFO: Pod "pod-secrets-68c3f67b-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.3736601s
Dec 21 12:07:15.989: INFO: Pod "pod-secrets-68c3f67b-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.408224597s
Dec 21 12:07:18.012: INFO: Pod "pod-secrets-68c3f67b-23ea-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.430974891s
STEP: Saw pod success
Dec 21 12:07:18.012: INFO: Pod "pod-secrets-68c3f67b-23ea-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:07:18.021: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-68c3f67b-23ea-11ea-bbd3-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 21 12:07:18.745: INFO: Waiting for pod pod-secrets-68c3f67b-23ea-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:07:18.763: INFO: Pod pod-secrets-68c3f67b-23ea-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:07:18.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-7wz4s" for this suite.
Dec 21 12:07:24.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:07:24.874: INFO: namespace: e2e-tests-secrets-7wz4s, resource: bindings, ignored listing per whitelist
Dec 21 12:07:25.160: INFO: namespace e2e-tests-secrets-7wz4s deletion completed in 6.371356886s

• [SLOW TEST:17.866 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
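
"Mappings and Item Mode set" refers to a Secret volume whose `items` remap keys to custom paths with per-item file modes. A sketch of the consuming pod, with hypothetical key/path names and the suite's mounttest image assumed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example              # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:
      - key: data-1                      # hypothetical key
        path: new-path-data-1            # remapped path ("mapping")
        mode: 0400                       # per-item mode ("Item Mode")
  containers:
  - name: secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    args: ["--file_content=/etc/secret-volume/new-path-data-1",
           "--file_mode=/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
```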
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:07:25.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-7370e852-23ea-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 21 12:07:25.480: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7372933b-23ea-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-nfx4f" to be "success or failure"
Dec 21 12:07:25.521: INFO: Pod "pod-projected-configmaps-7372933b-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 41.004668ms
Dec 21 12:07:27.822: INFO: Pod "pod-projected-configmaps-7372933b-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.342384379s
Dec 21 12:07:29.838: INFO: Pod "pod-projected-configmaps-7372933b-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.358479511s
Dec 21 12:07:31.884: INFO: Pod "pod-projected-configmaps-7372933b-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.404383621s
Dec 21 12:07:34.042: INFO: Pod "pod-projected-configmaps-7372933b-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.562344241s
Dec 21 12:07:36.316: INFO: Pod "pod-projected-configmaps-7372933b-23ea-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.836112172s
STEP: Saw pod success
Dec 21 12:07:36.316: INFO: Pod "pod-projected-configmaps-7372933b-23ea-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:07:36.333: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-7372933b-23ea-11ea-bbd3-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 21 12:07:36.922: INFO: Waiting for pod pod-projected-configmaps-7372933b-23ea-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:07:36.939: INFO: Pod pod-projected-configmaps-7372933b-23ea-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:07:36.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nfx4f" for this suite.
Dec 21 12:07:45.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:07:45.150: INFO: namespace: e2e-tests-projected-nfx4f, resource: bindings, ignored listing per whitelist
Dec 21 12:07:45.189: INFO: namespace e2e-tests-projected-nfx4f deletion completed in 8.240864232s

• [SLOW TEST:20.028 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
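
The projected configMap variant is the same idea expressed through a `projected` volume source. A sketch with hypothetical key/path names (the real names are generated per-run, as the UUID-suffixed configMap name above shows):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-example
          items:
          - key: data-1                  # hypothetical key
            path: path/to/data-2         # remapped path
            mode: 0400                   # per-item mode
  containers:
  - name: projected-configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    args: ["--file_content=/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
```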
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:07:45.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 21 12:07:45.352: INFO: Waiting up to 5m0s for pod "pod-7f46bdff-23ea-11ea-bbd3-0242ac110005" in namespace "e2e-tests-emptydir-45wd7" to be "success or failure"
Dec 21 12:07:45.377: INFO: Pod "pod-7f46bdff-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.812281ms
Dec 21 12:07:47.723: INFO: Pod "pod-7f46bdff-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.371014405s
Dec 21 12:07:49.743: INFO: Pod "pod-7f46bdff-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.390651023s
Dec 21 12:07:51.777: INFO: Pod "pod-7f46bdff-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.424490268s
Dec 21 12:07:53.789: INFO: Pod "pod-7f46bdff-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.436291077s
Dec 21 12:07:55.821: INFO: Pod "pod-7f46bdff-23ea-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.468332331s
STEP: Saw pod success
Dec 21 12:07:55.821: INFO: Pod "pod-7f46bdff-23ea-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:07:55.854: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-7f46bdff-23ea-11ea-bbd3-0242ac110005 container test-container: 
STEP: delete the pod
Dec 21 12:07:56.020: INFO: Waiting for pod pod-7f46bdff-23ea-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:07:56.033: INFO: Pod pod-7f46bdff-23ea-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:07:56.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-45wd7" for this suite.
Dec 21 12:08:04.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:08:04.574: INFO: namespace: e2e-tests-emptydir-45wd7, resource: bindings, ignored listing per whitelist
Dec 21 12:08:04.577: INFO: namespace e2e-tests-emptydir-45wd7 deletion completed in 8.528042337s

• [SLOW TEST:19.388 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
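
"(root,0777,default)" means: run as root, create a file with mode 0777, on the default emptyDir medium (node disk, not `medium: Memory`). A pod sketch under those assumptions, using the suite's mounttest image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example             # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir: {}                         # default medium: backed by node storage
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    # create a file with mode 0777 and report its permissions
    args: ["--new_file_0777=/test-volume/testfile",
           "--file_perm=/test-volume/testfile"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
```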
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:08:04.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 21 12:08:04.774: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ae249c8-23ea-11ea-bbd3-0242ac110005" in namespace "e2e-tests-downward-api-mtfpt" to be "success or failure"
Dec 21 12:08:04.812: INFO: Pod "downwardapi-volume-8ae249c8-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.731971ms
Dec 21 12:08:06.863: INFO: Pod "downwardapi-volume-8ae249c8-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089734782s
Dec 21 12:08:08.879: INFO: Pod "downwardapi-volume-8ae249c8-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105447788s
Dec 21 12:08:11.501: INFO: Pod "downwardapi-volume-8ae249c8-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.727696625s
Dec 21 12:08:13.554: INFO: Pod "downwardapi-volume-8ae249c8-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.780817278s
Dec 21 12:08:15.573: INFO: Pod "downwardapi-volume-8ae249c8-23ea-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.799173617s
STEP: Saw pod success
Dec 21 12:08:15.573: INFO: Pod "downwardapi-volume-8ae249c8-23ea-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:08:15.578: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8ae249c8-23ea-11ea-bbd3-0242ac110005 container client-container: 
STEP: delete the pod
Dec 21 12:08:15.889: INFO: Waiting for pod downwardapi-volume-8ae249c8-23ea-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:08:15.992: INFO: Pod downwardapi-volume-8ae249c8-23ea-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:08:15.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mtfpt" for this suite.
Dec 21 12:08:22.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:08:22.193: INFO: namespace: e2e-tests-downward-api-mtfpt, resource: bindings, ignored listing per whitelist
Dec 21 12:08:22.271: INFO: namespace e2e-tests-downward-api-mtfpt deletion completed in 6.254744208s

• [SLOW TEST:17.693 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
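
The downward API behavior under test: when a container declares no CPU limit, a `resourceFieldRef` on `limits.cpu` resolves to the node's allocatable CPU instead. A sketch of such a pod (names illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example       # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu           # no limit set, so this yields node allocatable CPU
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    args: ["--file_content=/etc/podinfo/cpu_limit"]
    # resources.limits.cpu is deliberately omitted; that is the condition being tested
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
```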
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:08:22.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 21 12:08:22.653: INFO: Waiting up to 5m0s for pod "pod-9578d4f1-23ea-11ea-bbd3-0242ac110005" in namespace "e2e-tests-emptydir-dkwl7" to be "success or failure"
Dec 21 12:08:22.661: INFO: Pod "pod-9578d4f1-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.927128ms
Dec 21 12:08:24.686: INFO: Pod "pod-9578d4f1-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03249565s
Dec 21 12:08:26.703: INFO: Pod "pod-9578d4f1-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04927653s
Dec 21 12:08:28.788: INFO: Pod "pod-9578d4f1-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133972486s
Dec 21 12:08:30.823: INFO: Pod "pod-9578d4f1-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.169328863s
Dec 21 12:08:32.845: INFO: Pod "pod-9578d4f1-23ea-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.191296244s
STEP: Saw pod success
Dec 21 12:08:32.845: INFO: Pod "pod-9578d4f1-23ea-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:08:32.858: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-9578d4f1-23ea-11ea-bbd3-0242ac110005 container test-container: 
STEP: delete the pod
Dec 21 12:08:33.157: INFO: Waiting for pod pod-9578d4f1-23ea-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:08:33.170: INFO: Pod pod-9578d4f1-23ea-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:08:33.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dkwl7" for this suite.
Dec 21 12:08:39.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:08:39.383: INFO: namespace: e2e-tests-emptydir-dkwl7, resource: bindings, ignored listing per whitelist
Dec 21 12:08:39.395: INFO: namespace e2e-tests-emptydir-dkwl7 deletion completed in 6.217203736s

• [SLOW TEST:17.124 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:08:39.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Dec 21 12:08:39.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-5dmgf run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 21 12:08:50.835: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 21 12:08:50.836: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:08:52.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5dmgf" for this suite.
Dec 21 12:08:59.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:08:59.301: INFO: namespace: e2e-tests-kubectl-5dmgf, resource: bindings, ignored listing per whitelist
Dec 21 12:08:59.484: INFO: namespace e2e-tests-kubectl-5dmgf deletion completed in 6.614612135s

• [SLOW TEST:20.089 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
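
The deprecated `kubectl run --generator=job/v1` invocation logged above creates a Job roughly equivalent to applying the manifest below and attaching to its pod. This is a sketch of the equivalence, not the suite's actual object; generator defaults may differ in minor fields:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure           # from --restart=OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        stdin: true                      # from --stdin; allows the attach to feed "abcd1234"
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
```

The `--rm=true` flag has no manifest equivalent: kubectl deletes the Job client-side after the attached command exits, which is the `job.batch "e2e-test-rm-busybox-job" deleted` line in the captured stdout.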
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:08:59.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 21 12:08:59.731: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 21 12:08:59.776: INFO: Number of nodes with available pods: 0
Dec 21 12:08:59.777: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:00.804: INFO: Number of nodes with available pods: 0
Dec 21 12:09:00.804: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:02.080: INFO: Number of nodes with available pods: 0
Dec 21 12:09:02.081: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:02.790: INFO: Number of nodes with available pods: 0
Dec 21 12:09:02.790: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:03.823: INFO: Number of nodes with available pods: 0
Dec 21 12:09:03.823: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:04.808: INFO: Number of nodes with available pods: 0
Dec 21 12:09:04.808: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:05.815: INFO: Number of nodes with available pods: 0
Dec 21 12:09:05.815: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:07.176: INFO: Number of nodes with available pods: 0
Dec 21 12:09:07.176: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:07.989: INFO: Number of nodes with available pods: 0
Dec 21 12:09:07.989: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:08.907: INFO: Number of nodes with available pods: 0
Dec 21 12:09:08.907: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:10.020: INFO: Number of nodes with available pods: 0
Dec 21 12:09:10.020: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:10.886: INFO: Number of nodes with available pods: 1
Dec 21 12:09:10.886: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 21 12:09:10.947: INFO: Wrong image for pod: daemon-set-hdkj2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 12:09:12.065: INFO: Wrong image for pod: daemon-set-hdkj2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 12:09:13.078: INFO: Wrong image for pod: daemon-set-hdkj2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 12:09:14.090: INFO: Wrong image for pod: daemon-set-hdkj2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 12:09:15.078: INFO: Wrong image for pod: daemon-set-hdkj2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 12:09:16.063: INFO: Wrong image for pod: daemon-set-hdkj2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 12:09:17.064: INFO: Wrong image for pod: daemon-set-hdkj2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 12:09:18.063: INFO: Wrong image for pod: daemon-set-hdkj2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 12:09:18.063: INFO: Pod daemon-set-hdkj2 is not available
Dec 21 12:09:19.096: INFO: Wrong image for pod: daemon-set-hdkj2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 12:09:19.096: INFO: Pod daemon-set-hdkj2 is not available
Dec 21 12:09:20.060: INFO: Wrong image for pod: daemon-set-hdkj2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 12:09:20.060: INFO: Pod daemon-set-hdkj2 is not available
Dec 21 12:09:21.064: INFO: Wrong image for pod: daemon-set-hdkj2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 12:09:21.064: INFO: Pod daemon-set-hdkj2 is not available
Dec 21 12:09:22.078: INFO: Wrong image for pod: daemon-set-hdkj2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 21 12:09:22.078: INFO: Pod daemon-set-hdkj2 is not available
Dec 21 12:09:23.066: INFO: Pod daemon-set-ss78g is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 21 12:09:23.165: INFO: Number of nodes with available pods: 0
Dec 21 12:09:23.165: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:25.273: INFO: Number of nodes with available pods: 0
Dec 21 12:09:25.273: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:26.189: INFO: Number of nodes with available pods: 0
Dec 21 12:09:26.189: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:27.210: INFO: Number of nodes with available pods: 0
Dec 21 12:09:27.211: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:28.999: INFO: Number of nodes with available pods: 0
Dec 21 12:09:28.999: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:29.268: INFO: Number of nodes with available pods: 0
Dec 21 12:09:29.268: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:30.307: INFO: Number of nodes with available pods: 0
Dec 21 12:09:30.307: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:31.212: INFO: Number of nodes with available pods: 0
Dec 21 12:09:31.213: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:32.222: INFO: Number of nodes with available pods: 0
Dec 21 12:09:32.223: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:09:33.201: INFO: Number of nodes with available pods: 1
Dec 21 12:09:33.201: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-s8nq8, will wait for the garbage collector to delete the pods
Dec 21 12:09:33.311: INFO: Deleting DaemonSet.extensions daemon-set took: 16.632652ms
Dec 21 12:09:33.512: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.71587ms
Dec 21 12:09:42.978: INFO: Number of nodes with available pods: 0
Dec 21 12:09:42.979: INFO: Number of running nodes: 0, number of available pods: 0
Dec 21 12:09:43.010: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-s8nq8/daemonsets","resourceVersion":"15564734"},"items":null}

Dec 21 12:09:43.089: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-s8nq8/pods","resourceVersion":"15564734"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:09:43.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-s8nq8" for this suite.
Dec 21 12:09:49.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:09:49.393: INFO: namespace: e2e-tests-daemonsets-s8nq8, resource: bindings, ignored listing per whitelist
Dec 21 12:09:49.448: INFO: namespace e2e-tests-daemonsets-s8nq8 deletion completed in 6.289059168s

• [SLOW TEST:49.963 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:09:49.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-c960c203-23ea-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 21 12:09:49.682: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c963aa78-23ea-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-fnpbg" to be "success or failure"
Dec 21 12:09:49.690: INFO: Pod "pod-projected-secrets-c963aa78-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.033077ms
Dec 21 12:09:51.706: INFO: Pod "pod-projected-secrets-c963aa78-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024672055s
Dec 21 12:09:53.752: INFO: Pod "pod-projected-secrets-c963aa78-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070159435s
Dec 21 12:09:55.792: INFO: Pod "pod-projected-secrets-c963aa78-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110553288s
Dec 21 12:09:58.399: INFO: Pod "pod-projected-secrets-c963aa78-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.717237135s
Dec 21 12:10:00.424: INFO: Pod "pod-projected-secrets-c963aa78-23ea-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.74201506s
STEP: Saw pod success
Dec 21 12:10:00.424: INFO: Pod "pod-projected-secrets-c963aa78-23ea-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:10:00.444: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-c963aa78-23ea-11ea-bbd3-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 21 12:10:01.584: INFO: Waiting for pod pod-projected-secrets-c963aa78-23ea-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:10:01.607: INFO: Pod pod-projected-secrets-c963aa78-23ea-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:10:01.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fnpbg" for this suite.
Dec 21 12:10:07.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:10:07.810: INFO: namespace: e2e-tests-projected-fnpbg, resource: bindings, ignored listing per whitelist
Dec 21 12:10:07.932: INFO: namespace e2e-tests-projected-fnpbg deletion completed in 6.315773807s

• [SLOW TEST:18.484 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:10:07.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 21 12:10:08.363: INFO: Waiting up to 5m0s for pod "pod-d465312d-23ea-11ea-bbd3-0242ac110005" in namespace "e2e-tests-emptydir-p8789" to be "success or failure"
Dec 21 12:10:08.460: INFO: Pod "pod-d465312d-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 96.795591ms
Dec 21 12:10:10.502: INFO: Pod "pod-d465312d-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139322705s
Dec 21 12:10:12.533: INFO: Pod "pod-d465312d-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170312248s
Dec 21 12:10:14.582: INFO: Pod "pod-d465312d-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.219533712s
Dec 21 12:10:16.652: INFO: Pod "pod-d465312d-23ea-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.289187047s
Dec 21 12:10:18.675: INFO: Pod "pod-d465312d-23ea-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.311851967s
STEP: Saw pod success
Dec 21 12:10:18.675: INFO: Pod "pod-d465312d-23ea-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:10:18.681: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d465312d-23ea-11ea-bbd3-0242ac110005 container test-container: 
STEP: delete the pod
Dec 21 12:10:18.839: INFO: Waiting for pod pod-d465312d-23ea-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:10:18.848: INFO: Pod pod-d465312d-23ea-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:10:18.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-p8789" for this suite.
Dec 21 12:10:25.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:10:26.110: INFO: namespace: e2e-tests-emptydir-p8789, resource: bindings, ignored listing per whitelist
Dec 21 12:10:26.177: INFO: namespace e2e-tests-emptydir-p8789 deletion completed in 7.278767915s

• [SLOW TEST:18.245 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:10:26.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:11:26.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-px69c" for this suite.
Dec 21 12:11:50.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:11:50.663: INFO: namespace: e2e-tests-container-probe-px69c, resource: bindings, ignored listing per whitelist
Dec 21 12:11:50.698: INFO: namespace e2e-tests-container-probe-px69c deletion completed in 24.254543577s

• [SLOW TEST:84.521 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:11:50.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 21 12:11:50.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-58pvs'
Dec 21 12:11:53.315: INFO: stderr: ""
Dec 21 12:11:53.315: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 21 12:11:53.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-58pvs'
Dec 21 12:11:53.630: INFO: stderr: ""
Dec 21 12:11:53.630: INFO: stdout: "update-demo-nautilus-8hf7b update-demo-nautilus-cskh6 "
Dec 21 12:11:53.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hf7b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-58pvs'
Dec 21 12:11:53.804: INFO: stderr: ""
Dec 21 12:11:53.804: INFO: stdout: ""
Dec 21 12:11:53.804: INFO: update-demo-nautilus-8hf7b is created but not running
Dec 21 12:11:58.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-58pvs'
Dec 21 12:11:59.017: INFO: stderr: ""
Dec 21 12:11:59.018: INFO: stdout: "update-demo-nautilus-8hf7b update-demo-nautilus-cskh6 "
Dec 21 12:11:59.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hf7b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-58pvs'
Dec 21 12:11:59.191: INFO: stderr: ""
Dec 21 12:11:59.191: INFO: stdout: ""
Dec 21 12:11:59.191: INFO: update-demo-nautilus-8hf7b is created but not running
Dec 21 12:12:04.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-58pvs'
Dec 21 12:12:04.381: INFO: stderr: ""
Dec 21 12:12:04.381: INFO: stdout: "update-demo-nautilus-8hf7b update-demo-nautilus-cskh6 "
Dec 21 12:12:04.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hf7b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-58pvs'
Dec 21 12:12:04.665: INFO: stderr: ""
Dec 21 12:12:04.665: INFO: stdout: ""
Dec 21 12:12:04.665: INFO: update-demo-nautilus-8hf7b is created but not running
Dec 21 12:12:09.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-58pvs'
Dec 21 12:12:09.932: INFO: stderr: ""
Dec 21 12:12:09.932: INFO: stdout: "update-demo-nautilus-8hf7b update-demo-nautilus-cskh6 "
Dec 21 12:12:09.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hf7b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-58pvs'
Dec 21 12:12:10.046: INFO: stderr: ""
Dec 21 12:12:10.046: INFO: stdout: "true"
Dec 21 12:12:10.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8hf7b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-58pvs'
Dec 21 12:12:10.193: INFO: stderr: ""
Dec 21 12:12:10.193: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 21 12:12:10.193: INFO: validating pod update-demo-nautilus-8hf7b
Dec 21 12:12:10.258: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 21 12:12:10.258: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 21 12:12:10.258: INFO: update-demo-nautilus-8hf7b is verified up and running
Dec 21 12:12:10.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cskh6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-58pvs'
Dec 21 12:12:10.419: INFO: stderr: ""
Dec 21 12:12:10.419: INFO: stdout: "true"
Dec 21 12:12:10.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cskh6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-58pvs'
Dec 21 12:12:10.658: INFO: stderr: ""
Dec 21 12:12:10.658: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 21 12:12:10.658: INFO: validating pod update-demo-nautilus-cskh6
Dec 21 12:12:10.671: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 21 12:12:10.671: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 21 12:12:10.671: INFO: update-demo-nautilus-cskh6 is verified up and running
STEP: using delete to clean up resources
Dec 21 12:12:10.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-58pvs'
Dec 21 12:12:10.849: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 21 12:12:10.849: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 21 12:12:10.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-58pvs'
Dec 21 12:12:11.039: INFO: stderr: "No resources found.\n"
Dec 21 12:12:11.039: INFO: stdout: ""
Dec 21 12:12:11.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-58pvs -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 21 12:12:11.225: INFO: stderr: ""
Dec 21 12:12:11.225: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:12:11.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-58pvs" for this suite.
Dec 21 12:12:35.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:12:35.447: INFO: namespace: e2e-tests-kubectl-58pvs, resource: bindings, ignored listing per whitelist
Dec 21 12:12:35.491: INFO: namespace e2e-tests-kubectl-58pvs deletion completed in 24.247342176s

• [SLOW TEST:44.792 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:12:35.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-2c912402-23eb-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 21 12:12:36.170: INFO: Waiting up to 5m0s for pod "pod-configmaps-2c929c19-23eb-11ea-bbd3-0242ac110005" in namespace "e2e-tests-configmap-xcp9t" to be "success or failure"
Dec 21 12:12:36.193: INFO: Pod "pod-configmaps-2c929c19-23eb-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.668951ms
Dec 21 12:12:38.217: INFO: Pod "pod-configmaps-2c929c19-23eb-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042537633s
Dec 21 12:12:40.231: INFO: Pod "pod-configmaps-2c929c19-23eb-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05581255s
Dec 21 12:12:42.250: INFO: Pod "pod-configmaps-2c929c19-23eb-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075390912s
Dec 21 12:12:44.441: INFO: Pod "pod-configmaps-2c929c19-23eb-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.266721094s
Dec 21 12:12:46.468: INFO: Pod "pod-configmaps-2c929c19-23eb-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.293014748s
Dec 21 12:12:48.511: INFO: Pod "pod-configmaps-2c929c19-23eb-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.336318128s
STEP: Saw pod success
Dec 21 12:12:48.511: INFO: Pod "pod-configmaps-2c929c19-23eb-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:12:48.544: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-2c929c19-23eb-11ea-bbd3-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 21 12:12:48.829: INFO: Waiting for pod pod-configmaps-2c929c19-23eb-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:12:48.834: INFO: Pod pod-configmaps-2c929c19-23eb-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:12:48.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-xcp9t" for this suite.
Dec 21 12:12:54.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:12:55.120: INFO: namespace: e2e-tests-configmap-xcp9t, resource: bindings, ignored listing per whitelist
Dec 21 12:12:55.155: INFO: namespace e2e-tests-configmap-xcp9t deletion completed in 6.311093903s

• [SLOW TEST:19.664 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:12:55.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 21 12:12:55.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-thmrh'
Dec 21 12:12:55.527: INFO: stderr: ""
Dec 21 12:12:55.527: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Dec 21 12:12:55.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-thmrh'
Dec 21 12:13:02.242: INFO: stderr: ""
Dec 21 12:13:02.242: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:13:02.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-thmrh" for this suite.
Dec 21 12:13:08.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:13:08.372: INFO: namespace: e2e-tests-kubectl-thmrh, resource: bindings, ignored listing per whitelist
Dec 21 12:13:08.616: INFO: namespace e2e-tests-kubectl-thmrh deletion completed in 6.360063402s

• [SLOW TEST:13.460 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:13:08.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-v6l22
I1221 12:13:08.779301       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-v6l22, replica count: 1
I1221 12:13:09.830216       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 12:13:10.830709       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 12:13:11.831057       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 12:13:12.831501       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 12:13:13.832011       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 12:13:14.832459       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 12:13:15.833509       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 12:13:16.833999       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 12:13:17.834935       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1221 12:13:18.835764       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 21 12:13:18.983: INFO: Created: latency-svc-j6msl
Dec 21 12:13:19.071: INFO: Got endpoints: latency-svc-j6msl [135.303595ms]
Dec 21 12:13:19.274: INFO: Created: latency-svc-kszzt
Dec 21 12:13:19.316: INFO: Got endpoints: latency-svc-kszzt [241.447454ms]
Dec 21 12:13:19.462: INFO: Created: latency-svc-b88lp
Dec 21 12:13:19.465: INFO: Got endpoints: latency-svc-b88lp [392.613101ms]
Dec 21 12:13:19.502: INFO: Created: latency-svc-swncd
Dec 21 12:13:19.525: INFO: Got endpoints: latency-svc-swncd [450.951922ms]
Dec 21 12:13:19.675: INFO: Created: latency-svc-889dq
Dec 21 12:13:19.728: INFO: Got endpoints: latency-svc-889dq [653.995871ms]
Dec 21 12:13:19.833: INFO: Created: latency-svc-wmhx6
Dec 21 12:13:19.884: INFO: Got endpoints: latency-svc-wmhx6 [809.829895ms]
Dec 21 12:13:20.019: INFO: Created: latency-svc-vzmbb
Dec 21 12:13:20.054: INFO: Got endpoints: latency-svc-vzmbb [979.031624ms]
Dec 21 12:13:20.107: INFO: Created: latency-svc-hxswj
Dec 21 12:13:20.303: INFO: Got endpoints: latency-svc-hxswj [1.228336002s]
Dec 21 12:13:20.331: INFO: Created: latency-svc-m8qws
Dec 21 12:13:20.339: INFO: Got endpoints: latency-svc-m8qws [1.264340798s]
Dec 21 12:13:20.488: INFO: Created: latency-svc-j9mpd
Dec 21 12:13:20.547: INFO: Got endpoints: latency-svc-j9mpd [1.472335188s]
Dec 21 12:13:20.571: INFO: Created: latency-svc-wmxqg
Dec 21 12:13:20.725: INFO: Got endpoints: latency-svc-wmxqg [1.650622071s]
Dec 21 12:13:20.769: INFO: Created: latency-svc-qznsb
Dec 21 12:13:20.800: INFO: Got endpoints: latency-svc-qznsb [1.72542832s]
Dec 21 12:13:20.966: INFO: Created: latency-svc-xv2dr
Dec 21 12:13:20.966: INFO: Got endpoints: latency-svc-xv2dr [1.891288896s]
Dec 21 12:13:21.155: INFO: Created: latency-svc-mbzgn
Dec 21 12:13:21.204: INFO: Got endpoints: latency-svc-mbzgn [2.129303374s]
Dec 21 12:13:21.394: INFO: Created: latency-svc-mnctm
Dec 21 12:13:21.405: INFO: Got endpoints: latency-svc-mnctm [2.329829651s]
Dec 21 12:13:21.473: INFO: Created: latency-svc-nvbww
Dec 21 12:13:21.637: INFO: Got endpoints: latency-svc-nvbww [2.561714447s]
Dec 21 12:13:21.666: INFO: Created: latency-svc-8hb7m
Dec 21 12:13:21.681: INFO: Got endpoints: latency-svc-8hb7m [2.364833214s]
Dec 21 12:13:21.879: INFO: Created: latency-svc-qwsfc
Dec 21 12:13:21.888: INFO: Got endpoints: latency-svc-qwsfc [2.422644504s]
Dec 21 12:13:22.154: INFO: Created: latency-svc-wnnw6
Dec 21 12:13:22.178: INFO: Got endpoints: latency-svc-wnnw6 [2.652866112s]
Dec 21 12:13:22.426: INFO: Created: latency-svc-cq2cn
Dec 21 12:13:22.437: INFO: Got endpoints: latency-svc-cq2cn [2.708958832s]
Dec 21 12:13:22.609: INFO: Created: latency-svc-bwhzd
Dec 21 12:13:22.677: INFO: Got endpoints: latency-svc-bwhzd [2.79302631s]
Dec 21 12:13:22.908: INFO: Created: latency-svc-g542m
Dec 21 12:13:23.050: INFO: Got endpoints: latency-svc-g542m [2.996509377s]
Dec 21 12:13:23.082: INFO: Created: latency-svc-9wmmz
Dec 21 12:13:23.270: INFO: Got endpoints: latency-svc-9wmmz [2.966497214s]
Dec 21 12:13:23.335: INFO: Created: latency-svc-w26wg
Dec 21 12:13:23.349: INFO: Got endpoints: latency-svc-w26wg [3.009767344s]
Dec 21 12:13:23.453: INFO: Created: latency-svc-z8hxw
Dec 21 12:13:23.514: INFO: Got endpoints: latency-svc-z8hxw [2.966981216s]
Dec 21 12:13:23.656: INFO: Created: latency-svc-66t2x
Dec 21 12:13:23.713: INFO: Created: latency-svc-z5rc7
Dec 21 12:13:23.715: INFO: Got endpoints: latency-svc-66t2x [2.988746582s]
Dec 21 12:13:23.735: INFO: Got endpoints: latency-svc-z5rc7 [2.934181778s]
Dec 21 12:13:23.859: INFO: Created: latency-svc-n5hr6
Dec 21 12:13:23.877: INFO: Got endpoints: latency-svc-n5hr6 [2.910934027s]
Dec 21 12:13:23.935: INFO: Created: latency-svc-htd9f
Dec 21 12:13:23.953: INFO: Got endpoints: latency-svc-htd9f [2.748264746s]
Dec 21 12:13:24.106: INFO: Created: latency-svc-ff5z6
Dec 21 12:13:24.139: INFO: Got endpoints: latency-svc-ff5z6 [2.733800369s]
Dec 21 12:13:24.199: INFO: Created: latency-svc-74dxx
Dec 21 12:13:24.319: INFO: Got endpoints: latency-svc-74dxx [2.681268338s]
Dec 21 12:13:24.406: INFO: Created: latency-svc-qhndt
Dec 21 12:13:24.622: INFO: Got endpoints: latency-svc-qhndt [2.941448695s]
Dec 21 12:13:24.645: INFO: Created: latency-svc-95lzw
Dec 21 12:13:24.662: INFO: Got endpoints: latency-svc-95lzw [2.774124081s]
Dec 21 12:13:24.728: INFO: Created: latency-svc-j6m6x
Dec 21 12:13:24.822: INFO: Got endpoints: latency-svc-j6m6x [2.644453645s]
Dec 21 12:13:24.853: INFO: Created: latency-svc-p259d
Dec 21 12:13:24.863: INFO: Got endpoints: latency-svc-p259d [2.425701204s]
Dec 21 12:13:25.003: INFO: Created: latency-svc-pskm7
Dec 21 12:13:25.009: INFO: Got endpoints: latency-svc-pskm7 [2.331595004s]
Dec 21 12:13:25.264: INFO: Created: latency-svc-cnxr2
Dec 21 12:13:25.291: INFO: Got endpoints: latency-svc-cnxr2 [2.240368452s]
Dec 21 12:13:25.415: INFO: Created: latency-svc-qmt7k
Dec 21 12:13:25.431: INFO: Got endpoints: latency-svc-qmt7k [2.161173907s]
Dec 21 12:13:25.514: INFO: Created: latency-svc-tnk5d
Dec 21 12:13:25.706: INFO: Got endpoints: latency-svc-tnk5d [2.356947078s]
Dec 21 12:13:25.747: INFO: Created: latency-svc-r77qk
Dec 21 12:13:25.753: INFO: Got endpoints: latency-svc-r77qk [2.238671834s]
Dec 21 12:13:25.938: INFO: Created: latency-svc-f8wrk
Dec 21 12:13:25.973: INFO: Got endpoints: latency-svc-f8wrk [2.25819698s]
Dec 21 12:13:26.012: INFO: Created: latency-svc-5wgf4
Dec 21 12:13:26.167: INFO: Got endpoints: latency-svc-5wgf4 [2.432157867s]
Dec 21 12:13:26.251: INFO: Created: latency-svc-5w69h
Dec 21 12:13:26.259: INFO: Got endpoints: latency-svc-5w69h [2.38139239s]
Dec 21 12:13:26.405: INFO: Created: latency-svc-qtds7
Dec 21 12:13:26.438: INFO: Got endpoints: latency-svc-qtds7 [2.485421054s]
Dec 21 12:13:26.687: INFO: Created: latency-svc-hc96g
Dec 21 12:13:26.716: INFO: Got endpoints: latency-svc-hc96g [2.576175865s]
Dec 21 12:13:26.965: INFO: Created: latency-svc-f6hn4
Dec 21 12:13:26.981: INFO: Got endpoints: latency-svc-f6hn4 [2.662330167s]
Dec 21 12:13:27.017: INFO: Created: latency-svc-drnld
Dec 21 12:13:27.036: INFO: Got endpoints: latency-svc-drnld [2.412826079s]
Dec 21 12:13:27.218: INFO: Created: latency-svc-wrn2n
Dec 21 12:13:27.230: INFO: Got endpoints: latency-svc-wrn2n [2.567554278s]
Dec 21 12:13:27.377: INFO: Created: latency-svc-jqqrv
Dec 21 12:13:27.393: INFO: Got endpoints: latency-svc-jqqrv [2.570660709s]
Dec 21 12:13:27.440: INFO: Created: latency-svc-sqt9x
Dec 21 12:13:27.579: INFO: Got endpoints: latency-svc-sqt9x [2.715757811s]
Dec 21 12:13:27.619: INFO: Created: latency-svc-xvmr4
Dec 21 12:13:27.645: INFO: Got endpoints: latency-svc-xvmr4 [2.636162093s]
Dec 21 12:13:27.778: INFO: Created: latency-svc-g5gl8
Dec 21 12:13:27.809: INFO: Got endpoints: latency-svc-g5gl8 [2.517584179s]
Dec 21 12:13:27.984: INFO: Created: latency-svc-fhqfb
Dec 21 12:13:27.991: INFO: Got endpoints: latency-svc-fhqfb [2.559136106s]
Dec 21 12:13:28.028: INFO: Created: latency-svc-ch29d
Dec 21 12:13:28.049: INFO: Got endpoints: latency-svc-ch29d [2.343035653s]
Dec 21 12:13:28.309: INFO: Created: latency-svc-h7rnm
Dec 21 12:13:28.351: INFO: Got endpoints: latency-svc-h7rnm [2.598335166s]
Dec 21 12:13:28.383: INFO: Created: latency-svc-fl9cv
Dec 21 12:13:28.489: INFO: Got endpoints: latency-svc-fl9cv [2.516038146s]
Dec 21 12:13:28.536: INFO: Created: latency-svc-l5mbc
Dec 21 12:13:28.773: INFO: Got endpoints: latency-svc-l5mbc [2.6060047s]
Dec 21 12:13:28.796: INFO: Created: latency-svc-77q99
Dec 21 12:13:28.796: INFO: Got endpoints: latency-svc-77q99 [2.537300512s]
Dec 21 12:13:28.841: INFO: Created: latency-svc-h26g5
Dec 21 12:13:29.075: INFO: Got endpoints: latency-svc-h26g5 [2.636336652s]
Dec 21 12:13:29.320: INFO: Created: latency-svc-chdg8
Dec 21 12:13:29.358: INFO: Created: latency-svc-sstbc
Dec 21 12:13:29.830: INFO: Got endpoints: latency-svc-chdg8 [3.113931761s]
Dec 21 12:13:29.830: INFO: Got endpoints: latency-svc-sstbc [2.848241357s]
Dec 21 12:13:29.857: INFO: Created: latency-svc-8xzjs
Dec 21 12:13:29.867: INFO: Got endpoints: latency-svc-8xzjs [2.831343709s]
Dec 21 12:13:29.927: INFO: Created: latency-svc-m659q
Dec 21 12:13:30.016: INFO: Got endpoints: latency-svc-m659q [2.785859495s]
Dec 21 12:13:30.039: INFO: Created: latency-svc-vf72r
Dec 21 12:13:30.051: INFO: Got endpoints: latency-svc-vf72r [2.657463884s]
Dec 21 12:13:30.086: INFO: Created: latency-svc-p5t6m
Dec 21 12:13:30.102: INFO: Got endpoints: latency-svc-p5t6m [2.522746104s]
Dec 21 12:13:30.224: INFO: Created: latency-svc-ncplc
Dec 21 12:13:30.230: INFO: Got endpoints: latency-svc-ncplc [2.585012624s]
Dec 21 12:13:30.425: INFO: Created: latency-svc-xchb5
Dec 21 12:13:30.458: INFO: Got endpoints: latency-svc-xchb5 [2.649152477s]
Dec 21 12:13:30.637: INFO: Created: latency-svc-xb8tq
Dec 21 12:13:30.654: INFO: Got endpoints: latency-svc-xb8tq [2.663544628s]
Dec 21 12:13:30.952: INFO: Created: latency-svc-xppls
Dec 21 12:13:30.994: INFO: Got endpoints: latency-svc-xppls [2.945186298s]
Dec 21 12:13:31.003: INFO: Created: latency-svc-2dct6
Dec 21 12:13:31.018: INFO: Got endpoints: latency-svc-2dct6 [2.666219617s]
Dec 21 12:13:31.266: INFO: Created: latency-svc-rvcc8
Dec 21 12:13:31.364: INFO: Got endpoints: latency-svc-rvcc8 [2.873905335s]
Dec 21 12:13:31.425: INFO: Created: latency-svc-97fhc
Dec 21 12:13:31.429: INFO: Got endpoints: latency-svc-97fhc [2.655652033s]
Dec 21 12:13:31.611: INFO: Created: latency-svc-57twg
Dec 21 12:13:31.611: INFO: Got endpoints: latency-svc-57twg [2.815117527s]
Dec 21 12:13:31.751: INFO: Created: latency-svc-qg2pg
Dec 21 12:13:31.797: INFO: Got endpoints: latency-svc-qg2pg [2.721698663s]
Dec 21 12:13:31.998: INFO: Created: latency-svc-r8bfs
Dec 21 12:13:32.037: INFO: Got endpoints: latency-svc-r8bfs [2.206915349s]
Dec 21 12:13:32.250: INFO: Created: latency-svc-l89hq
Dec 21 12:13:32.276: INFO: Got endpoints: latency-svc-l89hq [2.446027885s]
Dec 21 12:13:32.311: INFO: Created: latency-svc-pwmsz
Dec 21 12:13:32.475: INFO: Got endpoints: latency-svc-pwmsz [2.607722161s]
Dec 21 12:13:32.501: INFO: Created: latency-svc-nfl5t
Dec 21 12:13:32.534: INFO: Got endpoints: latency-svc-nfl5t [2.517309584s]
Dec 21 12:13:32.698: INFO: Created: latency-svc-8tn4m
Dec 21 12:13:32.720: INFO: Got endpoints: latency-svc-8tn4m [2.669020871s]
Dec 21 12:13:32.764: INFO: Created: latency-svc-wqzg5
Dec 21 12:13:32.900: INFO: Got endpoints: latency-svc-wqzg5 [2.797698561s]
Dec 21 12:13:32.912: INFO: Created: latency-svc-7gbfx
Dec 21 12:13:32.932: INFO: Got endpoints: latency-svc-7gbfx [2.701787242s]
Dec 21 12:13:32.990: INFO: Created: latency-svc-7p2bw
Dec 21 12:13:33.073: INFO: Got endpoints: latency-svc-7p2bw [2.614020608s]
Dec 21 12:13:33.139: INFO: Created: latency-svc-drz2n
Dec 21 12:13:33.161: INFO: Got endpoints: latency-svc-drz2n [2.506694908s]
Dec 21 12:13:33.268: INFO: Created: latency-svc-xsckk
Dec 21 12:13:33.335: INFO: Got endpoints: latency-svc-xsckk [2.340554601s]
Dec 21 12:13:33.350: INFO: Created: latency-svc-xknxw
Dec 21 12:13:33.364: INFO: Got endpoints: latency-svc-xknxw [2.346559264s]
Dec 21 12:13:33.474: INFO: Created: latency-svc-g4r97
Dec 21 12:13:33.488: INFO: Got endpoints: latency-svc-g4r97 [2.123371824s]
Dec 21 12:13:33.558: INFO: Created: latency-svc-5vzrg
Dec 21 12:13:33.669: INFO: Got endpoints: latency-svc-5vzrg [2.239235465s]
Dec 21 12:13:33.687: INFO: Created: latency-svc-h4q8f
Dec 21 12:13:33.728: INFO: Got endpoints: latency-svc-h4q8f [2.116325575s]
Dec 21 12:13:33.731: INFO: Created: latency-svc-2kptj
Dec 21 12:13:33.755: INFO: Got endpoints: latency-svc-2kptj [1.957568103s]
Dec 21 12:13:33.904: INFO: Created: latency-svc-zn7ss
Dec 21 12:13:33.920: INFO: Got endpoints: latency-svc-zn7ss [1.883333258s]
Dec 21 12:13:34.460: INFO: Created: latency-svc-txkz9
Dec 21 12:13:34.488: INFO: Got endpoints: latency-svc-txkz9 [2.212005137s]
Dec 21 12:13:34.836: INFO: Created: latency-svc-lzcpj
Dec 21 12:13:34.869: INFO: Got endpoints: latency-svc-lzcpj [2.393919097s]
Dec 21 12:13:35.086: INFO: Created: latency-svc-7vlrb
Dec 21 12:13:35.212: INFO: Got endpoints: latency-svc-7vlrb [2.678409551s]
Dec 21 12:13:35.415: INFO: Created: latency-svc-grzbz
Dec 21 12:13:35.415: INFO: Got endpoints: latency-svc-grzbz [2.694866176s]
Dec 21 12:13:35.460: INFO: Created: latency-svc-ncrmk
Dec 21 12:13:35.466: INFO: Got endpoints: latency-svc-ncrmk [2.565191209s]
Dec 21 12:13:35.594: INFO: Created: latency-svc-lsdtt
Dec 21 12:13:35.634: INFO: Got endpoints: latency-svc-lsdtt [2.701185187s]
Dec 21 12:13:35.704: INFO: Created: latency-svc-fhgrd
Dec 21 12:13:35.811: INFO: Got endpoints: latency-svc-fhgrd [2.738275401s]
Dec 21 12:13:35.853: INFO: Created: latency-svc-fvllf
Dec 21 12:13:35.871: INFO: Got endpoints: latency-svc-fvllf [2.710185289s]
Dec 21 12:13:36.022: INFO: Created: latency-svc-dkh9n
Dec 21 12:13:36.045: INFO: Got endpoints: latency-svc-dkh9n [2.709833605s]
Dec 21 12:13:36.225: INFO: Created: latency-svc-r7xh9
Dec 21 12:13:36.405: INFO: Got endpoints: latency-svc-r7xh9 [3.040054533s]
Dec 21 12:13:36.419: INFO: Created: latency-svc-zthzt
Dec 21 12:13:36.430: INFO: Got endpoints: latency-svc-zthzt [2.941943589s]
Dec 21 12:13:36.615: INFO: Created: latency-svc-48k8m
Dec 21 12:13:36.651: INFO: Got endpoints: latency-svc-48k8m [2.982161799s]
Dec 21 12:13:36.818: INFO: Created: latency-svc-zqdjf
Dec 21 12:13:36.831: INFO: Got endpoints: latency-svc-zqdjf [3.103029301s]
Dec 21 12:13:36.989: INFO: Created: latency-svc-fwrxc
Dec 21 12:13:37.012: INFO: Got endpoints: latency-svc-fwrxc [3.256906037s]
Dec 21 12:13:37.068: INFO: Created: latency-svc-zmbv8
Dec 21 12:13:37.168: INFO: Got endpoints: latency-svc-zmbv8 [3.247445219s]
Dec 21 12:13:37.289: INFO: Created: latency-svc-vpcb8
Dec 21 12:13:37.360: INFO: Got endpoints: latency-svc-vpcb8 [2.871671624s]
Dec 21 12:13:37.403: INFO: Created: latency-svc-4qvpb
Dec 21 12:13:37.419: INFO: Got endpoints: latency-svc-4qvpb [2.549671881s]
Dec 21 12:13:37.570: INFO: Created: latency-svc-mc8vf
Dec 21 12:13:37.579: INFO: Got endpoints: latency-svc-mc8vf [2.36594754s]
Dec 21 12:13:37.658: INFO: Created: latency-svc-n7vnj
Dec 21 12:13:37.823: INFO: Got endpoints: latency-svc-n7vnj [2.407420579s]
Dec 21 12:13:37.886: INFO: Created: latency-svc-f245m
Dec 21 12:13:38.090: INFO: Got endpoints: latency-svc-f245m [2.623931589s]
Dec 21 12:13:38.123: INFO: Created: latency-svc-w5pvc
Dec 21 12:13:38.158: INFO: Got endpoints: latency-svc-w5pvc [2.524331298s]
Dec 21 12:13:38.347: INFO: Created: latency-svc-q8f76
Dec 21 12:13:38.371: INFO: Got endpoints: latency-svc-q8f76 [2.559248427s]
Dec 21 12:13:38.458: INFO: Created: latency-svc-8bgz6
Dec 21 12:13:38.540: INFO: Got endpoints: latency-svc-8bgz6 [2.668230757s]
Dec 21 12:13:38.583: INFO: Created: latency-svc-spd9n
Dec 21 12:13:38.602: INFO: Got endpoints: latency-svc-spd9n [2.556909956s]
Dec 21 12:13:38.781: INFO: Created: latency-svc-b6zgh
Dec 21 12:13:38.822: INFO: Got endpoints: latency-svc-b6zgh [2.416946643s]
Dec 21 12:13:38.829: INFO: Created: latency-svc-hrslt
Dec 21 12:13:38.970: INFO: Got endpoints: latency-svc-hrslt [2.539789835s]
Dec 21 12:13:39.006: INFO: Created: latency-svc-w6nrq
Dec 21 12:13:39.021: INFO: Got endpoints: latency-svc-w6nrq [2.369131755s]
Dec 21 12:13:39.229: INFO: Created: latency-svc-94bn6
Dec 21 12:13:39.247: INFO: Got endpoints: latency-svc-94bn6 [2.41588193s]
Dec 21 12:13:39.396: INFO: Created: latency-svc-tzbvg
Dec 21 12:13:39.422: INFO: Got endpoints: latency-svc-tzbvg [2.410009071s]
Dec 21 12:13:39.887: INFO: Created: latency-svc-dmsjs
Dec 21 12:13:39.930: INFO: Got endpoints: latency-svc-dmsjs [2.762063153s]
Dec 21 12:13:40.199: INFO: Created: latency-svc-lfnb7
Dec 21 12:13:40.229: INFO: Got endpoints: latency-svc-lfnb7 [2.868613111s]
Dec 21 12:13:40.429: INFO: Created: latency-svc-fwl6f
Dec 21 12:13:40.543: INFO: Got endpoints: latency-svc-fwl6f [3.123464934s]
Dec 21 12:13:40.584: INFO: Created: latency-svc-dhnrx
Dec 21 12:13:40.638: INFO: Got endpoints: latency-svc-dhnrx [3.059735593s]
Dec 21 12:13:40.644: INFO: Created: latency-svc-rqp82
Dec 21 12:13:40.661: INFO: Got endpoints: latency-svc-rqp82 [2.838619407s]
Dec 21 12:13:40.855: INFO: Created: latency-svc-t467f
Dec 21 12:13:40.882: INFO: Got endpoints: latency-svc-t467f [2.791480397s]
Dec 21 12:13:41.045: INFO: Created: latency-svc-7gxq4
Dec 21 12:13:41.063: INFO: Got endpoints: latency-svc-7gxq4 [2.904230582s]
Dec 21 12:13:41.233: INFO: Created: latency-svc-xx8kc
Dec 21 12:13:41.250: INFO: Got endpoints: latency-svc-xx8kc [2.879286556s]
Dec 21 12:13:41.322: INFO: Created: latency-svc-rh7s8
Dec 21 12:13:41.417: INFO: Got endpoints: latency-svc-rh7s8 [2.87649698s]
Dec 21 12:13:41.446: INFO: Created: latency-svc-58k66
Dec 21 12:13:41.456: INFO: Got endpoints: latency-svc-58k66 [2.852989504s]
Dec 21 12:13:41.623: INFO: Created: latency-svc-xf4wf
Dec 21 12:13:41.655: INFO: Got endpoints: latency-svc-xf4wf [2.832967186s]
Dec 21 12:13:41.662: INFO: Created: latency-svc-gl87w
Dec 21 12:13:41.668: INFO: Got endpoints: latency-svc-gl87w [2.698553165s]
Dec 21 12:13:41.789: INFO: Created: latency-svc-w6444
Dec 21 12:13:41.812: INFO: Got endpoints: latency-svc-w6444 [2.790613834s]
Dec 21 12:13:42.002: INFO: Created: latency-svc-fc5j5
Dec 21 12:13:42.029: INFO: Got endpoints: latency-svc-fc5j5 [2.781543487s]
Dec 21 12:13:42.560: INFO: Created: latency-svc-x64jd
Dec 21 12:13:42.612: INFO: Got endpoints: latency-svc-x64jd [3.190317339s]
Dec 21 12:13:42.965: INFO: Created: latency-svc-tl79f
Dec 21 12:13:43.192: INFO: Got endpoints: latency-svc-tl79f [3.260815609s]
Dec 21 12:13:43.270: INFO: Created: latency-svc-djbvm
Dec 21 12:13:43.473: INFO: Got endpoints: latency-svc-djbvm [3.243794517s]
Dec 21 12:13:43.503: INFO: Created: latency-svc-d2d84
Dec 21 12:13:43.631: INFO: Got endpoints: latency-svc-d2d84 [3.088052288s]
Dec 21 12:13:43.652: INFO: Created: latency-svc-6j7z9
Dec 21 12:13:43.670: INFO: Got endpoints: latency-svc-6j7z9 [3.031114498s]
Dec 21 12:13:43.737: INFO: Created: latency-svc-rrrcs
Dec 21 12:13:43.838: INFO: Got endpoints: latency-svc-rrrcs [3.176451853s]
Dec 21 12:13:44.094: INFO: Created: latency-svc-g7bj4
Dec 21 12:13:44.094: INFO: Got endpoints: latency-svc-g7bj4 [3.212299295s]
Dec 21 12:13:44.218: INFO: Created: latency-svc-s9w6l
Dec 21 12:13:44.255: INFO: Got endpoints: latency-svc-s9w6l [3.191727634s]
Dec 21 12:13:44.339: INFO: Created: latency-svc-48b9c
Dec 21 12:13:44.461: INFO: Got endpoints: latency-svc-48b9c [3.21091462s]
Dec 21 12:13:44.565: INFO: Created: latency-svc-smrmc
Dec 21 12:13:44.651: INFO: Got endpoints: latency-svc-smrmc [3.2339942s]
Dec 21 12:13:44.660: INFO: Created: latency-svc-hqqvf
Dec 21 12:13:44.668: INFO: Got endpoints: latency-svc-hqqvf [3.212618468s]
Dec 21 12:13:44.701: INFO: Created: latency-svc-hrxvv
Dec 21 12:13:44.739: INFO: Got endpoints: latency-svc-hrxvv [3.08368732s]
Dec 21 12:13:44.745: INFO: Created: latency-svc-kwrhw
Dec 21 12:13:44.868: INFO: Got endpoints: latency-svc-kwrhw [3.19912662s]
Dec 21 12:13:44.884: INFO: Created: latency-svc-4lkmq
Dec 21 12:13:44.905: INFO: Got endpoints: latency-svc-4lkmq [3.092315173s]
Dec 21 12:13:44.958: INFO: Created: latency-svc-625zj
Dec 21 12:13:45.036: INFO: Got endpoints: latency-svc-625zj [3.006769938s]
Dec 21 12:13:45.070: INFO: Created: latency-svc-qnlwv
Dec 21 12:13:45.303: INFO: Got endpoints: latency-svc-qnlwv [2.689747846s]
Dec 21 12:13:45.325: INFO: Created: latency-svc-zxnbf
Dec 21 12:13:45.343: INFO: Got endpoints: latency-svc-zxnbf [2.151263287s]
Dec 21 12:13:45.508: INFO: Created: latency-svc-8c9c5
Dec 21 12:13:45.522: INFO: Got endpoints: latency-svc-8c9c5 [2.048331389s]
Dec 21 12:13:45.576: INFO: Created: latency-svc-fgv29
Dec 21 12:13:45.587: INFO: Got endpoints: latency-svc-fgv29 [1.955252426s]
Dec 21 12:13:45.816: INFO: Created: latency-svc-xzxgk
Dec 21 12:13:46.011: INFO: Got endpoints: latency-svc-xzxgk [2.341653421s]
Dec 21 12:13:46.044: INFO: Created: latency-svc-8khj7
Dec 21 12:13:46.053: INFO: Got endpoints: latency-svc-8khj7 [2.214602504s]
Dec 21 12:13:46.244: INFO: Created: latency-svc-h66x9
Dec 21 12:13:46.254: INFO: Got endpoints: latency-svc-h66x9 [2.160063881s]
Dec 21 12:13:46.452: INFO: Created: latency-svc-ll58v
Dec 21 12:13:46.465: INFO: Got endpoints: latency-svc-ll58v [2.209857158s]
Dec 21 12:13:46.647: INFO: Created: latency-svc-522ph
Dec 21 12:13:46.656: INFO: Got endpoints: latency-svc-522ph [2.194762125s]
Dec 21 12:13:46.794: INFO: Created: latency-svc-9tqqr
Dec 21 12:13:46.817: INFO: Got endpoints: latency-svc-9tqqr [2.165471689s]
Dec 21 12:13:46.948: INFO: Created: latency-svc-q8ztf
Dec 21 12:13:46.968: INFO: Got endpoints: latency-svc-q8ztf [2.299889835s]
Dec 21 12:13:47.027: INFO: Created: latency-svc-mhxdh
Dec 21 12:13:47.156: INFO: Got endpoints: latency-svc-mhxdh [2.416842185s]
Dec 21 12:13:47.256: INFO: Created: latency-svc-dxld8
Dec 21 12:13:47.842: INFO: Got endpoints: latency-svc-dxld8 [2.973702085s]
Dec 21 12:13:48.039: INFO: Created: latency-svc-7tvd5
Dec 21 12:13:48.088: INFO: Got endpoints: latency-svc-7tvd5 [3.182916967s]
Dec 21 12:13:48.257: INFO: Created: latency-svc-rqcmm
Dec 21 12:13:48.273: INFO: Got endpoints: latency-svc-rqcmm [3.236842316s]
Dec 21 12:13:48.327: INFO: Created: latency-svc-5wjxs
Dec 21 12:13:48.473: INFO: Got endpoints: latency-svc-5wjxs [3.16946632s]
Dec 21 12:13:48.492: INFO: Created: latency-svc-4qr2g
Dec 21 12:13:48.520: INFO: Got endpoints: latency-svc-4qr2g [3.177070112s]
Dec 21 12:13:48.703: INFO: Created: latency-svc-5rmb4
Dec 21 12:13:48.717: INFO: Got endpoints: latency-svc-5rmb4 [3.1949977s]
Dec 21 12:13:48.923: INFO: Created: latency-svc-kl7wl
Dec 21 12:13:48.944: INFO: Got endpoints: latency-svc-kl7wl [3.357655224s]
Dec 21 12:13:49.108: INFO: Created: latency-svc-rs8vp
Dec 21 12:13:49.143: INFO: Got endpoints: latency-svc-rs8vp [3.131314323s]
Dec 21 12:13:49.258: INFO: Created: latency-svc-9kp4s
Dec 21 12:13:49.271: INFO: Got endpoints: latency-svc-9kp4s [3.21802634s]
Dec 21 12:13:49.320: INFO: Created: latency-svc-kthhk
Dec 21 12:13:49.423: INFO: Got endpoints: latency-svc-kthhk [3.168533176s]
Dec 21 12:13:49.454: INFO: Created: latency-svc-zf6qw
Dec 21 12:13:49.457: INFO: Got endpoints: latency-svc-zf6qw [2.991370917s]
Dec 21 12:13:49.515: INFO: Created: latency-svc-grh95
Dec 21 12:13:49.630: INFO: Got endpoints: latency-svc-grh95 [2.973557834s]
Dec 21 12:13:49.722: INFO: Created: latency-svc-9wn7c
Dec 21 12:13:49.729: INFO: Got endpoints: latency-svc-9wn7c [2.911928626s]
Dec 21 12:13:49.975: INFO: Created: latency-svc-f6k6r
Dec 21 12:13:50.156: INFO: Got endpoints: latency-svc-f6k6r [3.187563748s]
Dec 21 12:13:50.164: INFO: Created: latency-svc-4btq2
Dec 21 12:13:50.190: INFO: Got endpoints: latency-svc-4btq2 [3.033242042s]
Dec 21 12:13:50.344: INFO: Created: latency-svc-gpknc
Dec 21 12:13:50.352: INFO: Got endpoints: latency-svc-gpknc [2.510503055s]
Dec 21 12:13:50.441: INFO: Created: latency-svc-8ddwz
Dec 21 12:13:50.519: INFO: Got endpoints: latency-svc-8ddwz [2.431173834s]
Dec 21 12:13:50.547: INFO: Created: latency-svc-2bx6h
Dec 21 12:13:50.593: INFO: Got endpoints: latency-svc-2bx6h [2.319539082s]
Dec 21 12:13:50.742: INFO: Created: latency-svc-8w9dr
Dec 21 12:13:50.900: INFO: Got endpoints: latency-svc-8w9dr [2.427286662s]
Dec 21 12:13:50.965: INFO: Created: latency-svc-nx6vm
Dec 21 12:13:50.996: INFO: Got endpoints: latency-svc-nx6vm [2.47522013s]
Dec 21 12:13:51.163: INFO: Created: latency-svc-v8h2x
Dec 21 12:13:51.171: INFO: Got endpoints: latency-svc-v8h2x [2.453288299s]
Dec 21 12:13:51.221: INFO: Created: latency-svc-dnc7z
Dec 21 12:13:51.311: INFO: Got endpoints: latency-svc-dnc7z [2.366689667s]
Dec 21 12:13:51.343: INFO: Created: latency-svc-55r46
Dec 21 12:13:51.359: INFO: Got endpoints: latency-svc-55r46 [2.21529819s]
Dec 21 12:13:51.411: INFO: Created: latency-svc-k8hv5
Dec 21 12:13:51.546: INFO: Got endpoints: latency-svc-k8hv5 [2.274826693s]
Dec 21 12:13:51.613: INFO: Created: latency-svc-t6tnh
Dec 21 12:13:51.680: INFO: Got endpoints: latency-svc-t6tnh [2.256722364s]
Dec 21 12:13:51.767: INFO: Created: latency-svc-g84t7
Dec 21 12:13:51.911: INFO: Got endpoints: latency-svc-g84t7 [2.454129573s]
Dec 21 12:13:51.951: INFO: Created: latency-svc-th469
Dec 21 12:13:51.987: INFO: Got endpoints: latency-svc-th469 [2.35698603s]
Dec 21 12:13:52.145: INFO: Created: latency-svc-rtt7r
Dec 21 12:13:52.171: INFO: Got endpoints: latency-svc-rtt7r [2.44209237s]
Dec 21 12:13:52.234: INFO: Created: latency-svc-vcrzw
Dec 21 12:13:52.323: INFO: Got endpoints: latency-svc-vcrzw [2.166552363s]
Dec 21 12:13:52.370: INFO: Created: latency-svc-c9ph9
Dec 21 12:13:52.400: INFO: Got endpoints: latency-svc-c9ph9 [2.209777907s]
Dec 21 12:13:52.412: INFO: Created: latency-svc-rblnf
Dec 21 12:13:52.560: INFO: Got endpoints: latency-svc-rblnf [2.207102184s]
Dec 21 12:13:52.620: INFO: Created: latency-svc-msfgg
Dec 21 12:13:52.803: INFO: Got endpoints: latency-svc-msfgg [2.283246952s]
Dec 21 12:13:52.821: INFO: Created: latency-svc-dqq8s
Dec 21 12:13:52.888: INFO: Got endpoints: latency-svc-dqq8s [2.294672674s]
Dec 21 12:13:52.997: INFO: Created: latency-svc-8mhdv
Dec 21 12:13:53.028: INFO: Got endpoints: latency-svc-8mhdv [2.127559724s]
Dec 21 12:13:53.212: INFO: Created: latency-svc-c7w26
Dec 21 12:13:53.268: INFO: Got endpoints: latency-svc-c7w26 [2.272370623s]
Dec 21 12:13:53.388: INFO: Created: latency-svc-mwqng
Dec 21 12:13:53.400: INFO: Got endpoints: latency-svc-mwqng [2.229016816s]
Dec 21 12:13:53.439: INFO: Created: latency-svc-wnclw
Dec 21 12:13:53.457: INFO: Got endpoints: latency-svc-wnclw [2.145464837s]
Dec 21 12:13:53.634: INFO: Created: latency-svc-drmh4
Dec 21 12:13:53.655: INFO: Got endpoints: latency-svc-drmh4 [2.295832376s]
Dec 21 12:13:53.836: INFO: Created: latency-svc-c4s29
Dec 21 12:13:53.855: INFO: Got endpoints: latency-svc-c4s29 [2.308357785s]
Dec 21 12:13:53.873: INFO: Created: latency-svc-zksdx
Dec 21 12:13:54.069: INFO: Got endpoints: latency-svc-zksdx [2.387612521s]
Dec 21 12:13:54.109: INFO: Created: latency-svc-mzqqq
Dec 21 12:13:54.119: INFO: Got endpoints: latency-svc-mzqqq [2.207347852s]
Dec 21 12:13:54.119: INFO: Latencies: [241.447454ms 392.613101ms 450.951922ms 653.995871ms 809.829895ms 979.031624ms 1.228336002s 1.264340798s 1.472335188s 1.650622071s 1.72542832s 1.883333258s 1.891288896s 1.955252426s 1.957568103s 2.048331389s 2.116325575s 2.123371824s 2.127559724s 2.129303374s 2.145464837s 2.151263287s 2.160063881s 2.161173907s 2.165471689s 2.166552363s 2.194762125s 2.206915349s 2.207102184s 2.207347852s 2.209777907s 2.209857158s 2.212005137s 2.214602504s 2.21529819s 2.229016816s 2.238671834s 2.239235465s 2.240368452s 2.256722364s 2.25819698s 2.272370623s 2.274826693s 2.283246952s 2.294672674s 2.295832376s 2.299889835s 2.308357785s 2.319539082s 2.329829651s 2.331595004s 2.340554601s 2.341653421s 2.343035653s 2.346559264s 2.356947078s 2.35698603s 2.364833214s 2.36594754s 2.366689667s 2.369131755s 2.38139239s 2.387612521s 2.393919097s 2.407420579s 2.410009071s 2.412826079s 2.41588193s 2.416842185s 2.416946643s 2.422644504s 2.425701204s 2.427286662s 2.431173834s 2.432157867s 2.44209237s 2.446027885s 2.453288299s 2.454129573s 2.47522013s 2.485421054s 2.506694908s 2.510503055s 2.516038146s 2.517309584s 2.517584179s 2.522746104s 2.524331298s 2.537300512s 2.539789835s 2.549671881s 2.556909956s 2.559136106s 2.559248427s 2.561714447s 2.565191209s 2.567554278s 2.570660709s 2.576175865s 2.585012624s 2.598335166s 2.6060047s 2.607722161s 2.614020608s 2.623931589s 2.636162093s 2.636336652s 2.644453645s 2.649152477s 2.652866112s 2.655652033s 2.657463884s 2.662330167s 2.663544628s 2.666219617s 2.668230757s 2.669020871s 2.678409551s 2.681268338s 2.689747846s 2.694866176s 2.698553165s 2.701185187s 2.701787242s 2.708958832s 2.709833605s 2.710185289s 2.715757811s 2.721698663s 2.733800369s 2.738275401s 2.748264746s 2.762063153s 2.774124081s 2.781543487s 2.785859495s 2.790613834s 2.791480397s 2.79302631s 2.797698561s 2.815117527s 2.831343709s 2.832967186s 2.838619407s 2.848241357s 2.852989504s 2.868613111s 2.871671624s 2.873905335s 2.87649698s 2.879286556s 2.904230582s 2.910934027s 2.911928626s 2.934181778s 2.941448695s 2.941943589s 2.945186298s 2.966497214s 2.966981216s 2.973557834s 2.973702085s 2.982161799s 2.988746582s 2.991370917s 2.996509377s 3.006769938s 3.009767344s 3.031114498s 3.033242042s 3.040054533s 3.059735593s 3.08368732s 3.088052288s 3.092315173s 3.103029301s 3.113931761s 3.123464934s 3.131314323s 3.168533176s 3.16946632s 3.176451853s 3.177070112s 3.182916967s 3.187563748s 3.190317339s 3.191727634s 3.1949977s 3.19912662s 3.21091462s 3.212299295s 3.212618468s 3.21802634s 3.2339942s 3.236842316s 3.243794517s 3.247445219s 3.256906037s 3.260815609s 3.357655224s]
Dec 21 12:13:54.120: INFO: 50 %ile: 2.598335166s
Dec 21 12:13:54.120: INFO: 90 %ile: 3.16946632s
Dec 21 12:13:54.120: INFO: 99 %ile: 3.260815609s
Dec 21 12:13:54.120: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:13:54.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-v6l22" for this suite.
Dec 21 12:14:50.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:14:50.467: INFO: namespace: e2e-tests-svc-latency-v6l22, resource: bindings, ignored listing per whitelist
Dec 21 12:14:50.520: INFO: namespace e2e-tests-svc-latency-v6l22 deletion completed in 56.378427225s

• [SLOW TEST:101.904 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
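The summary lines in the latency test above report the 50th, 90th, and 99th percentiles over the 200 sorted samples. A minimal sketch of a percentile lookup that is consistent with all three reported values (the element at index n·p/100 of the ascending-sorted list); this is an illustration of the index convention, not the e2e framework's actual code:

```python
def percentile(sorted_samples, p):
    """Return the p-th percentile of an ascending-sorted sample list,
    using the element at index n*p//100 (clamped to the last element).
    This convention reproduces the 50/90/99 %ile lines in the log above
    for the 200-sample run; it is a sketch, not the framework source.
    """
    n = len(sorted_samples)
    idx = min(n * p // 100, n - 1)
    return sorted_samples[idx]
```

With 200 samples, p=50 selects index 100 (the 101st value), p=90 index 180, and p=99 index 198 — matching the three "%ile" lines reported by the test.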
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:14:50.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 21 12:14:50.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wrfnm'
Dec 21 12:14:51.179: INFO: stderr: ""
Dec 21 12:14:51.179: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 21 12:14:52.197: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:14:52.197: INFO: Found 0 / 1
Dec 21 12:14:53.204: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:14:53.204: INFO: Found 0 / 1
Dec 21 12:14:54.197: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:14:54.197: INFO: Found 0 / 1
Dec 21 12:14:55.197: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:14:55.197: INFO: Found 0 / 1
Dec 21 12:14:56.917: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:14:56.917: INFO: Found 0 / 1
Dec 21 12:14:57.197: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:14:57.197: INFO: Found 0 / 1
Dec 21 12:14:58.194: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:14:58.194: INFO: Found 0 / 1
Dec 21 12:14:59.209: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:14:59.209: INFO: Found 0 / 1
Dec 21 12:15:00.223: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:15:00.223: INFO: Found 0 / 1
Dec 21 12:15:01.207: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:15:01.207: INFO: Found 1 / 1
Dec 21 12:15:01.207: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 21 12:15:01.219: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:15:01.219: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 21 12:15:01.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-7rcv7 --namespace=e2e-tests-kubectl-wrfnm -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 21 12:15:01.398: INFO: stderr: ""
Dec 21 12:15:01.399: INFO: stdout: "pod/redis-master-7rcv7 patched\n"
STEP: checking annotations
Dec 21 12:15:01.407: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:15:01.407: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:15:01.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wrfnm" for this suite.
Dec 21 12:15:25.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:15:25.605: INFO: namespace: e2e-tests-kubectl-wrfnm, resource: bindings, ignored listing per whitelist
Dec 21 12:15:25.642: INFO: namespace e2e-tests-kubectl-wrfnm deletion completed in 24.227043836s

• [SLOW TEST:35.122 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:15:25.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Dec 21 12:15:25.919: INFO: Waiting up to 5m0s for pod "var-expansion-91d0ed4d-23eb-11ea-bbd3-0242ac110005" in namespace "e2e-tests-var-expansion-wgmtw" to be "success or failure"
Dec 21 12:15:25.946: INFO: Pod "var-expansion-91d0ed4d-23eb-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.324286ms
Dec 21 12:15:27.959: INFO: Pod "var-expansion-91d0ed4d-23eb-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039112354s
Dec 21 12:15:29.973: INFO: Pod "var-expansion-91d0ed4d-23eb-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05308818s
Dec 21 12:15:32.415: INFO: Pod "var-expansion-91d0ed4d-23eb-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.495799926s
Dec 21 12:15:34.455: INFO: Pod "var-expansion-91d0ed4d-23eb-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.535886715s
Dec 21 12:15:36.472: INFO: Pod "var-expansion-91d0ed4d-23eb-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.552364426s
STEP: Saw pod success
Dec 21 12:15:36.472: INFO: Pod "var-expansion-91d0ed4d-23eb-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:15:36.485: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-91d0ed4d-23eb-11ea-bbd3-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 21 12:15:36.836: INFO: Waiting for pod var-expansion-91d0ed4d-23eb-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:15:37.286: INFO: Pod var-expansion-91d0ed4d-23eb-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:15:37.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-wgmtw" for this suite.
Dec 21 12:15:43.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:15:43.702: INFO: namespace: e2e-tests-var-expansion-wgmtw, resource: bindings, ignored listing per whitelist
Dec 21 12:15:43.915: INFO: namespace e2e-tests-var-expansion-wgmtw deletion completed in 6.598889698s

• [SLOW TEST:18.273 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:15:43.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 21 12:15:44.277: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:16:06.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-l2x8j" for this suite.
Dec 21 12:16:30.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:16:30.551: INFO: namespace: e2e-tests-init-container-l2x8j, resource: bindings, ignored listing per whitelist
Dec 21 12:16:30.560: INFO: namespace e2e-tests-init-container-l2x8j deletion completed in 24.285270465s

• [SLOW TEST:46.644 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:16:30.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 21 12:16:30.757: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b878cb27-23eb-11ea-bbd3-0242ac110005" in namespace "e2e-tests-downward-api-6v8xq" to be "success or failure"
Dec 21 12:16:30.765: INFO: Pod "downwardapi-volume-b878cb27-23eb-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.861279ms
Dec 21 12:16:32.925: INFO: Pod "downwardapi-volume-b878cb27-23eb-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167995445s
Dec 21 12:16:34.935: INFO: Pod "downwardapi-volume-b878cb27-23eb-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177838047s
Dec 21 12:16:36.957: INFO: Pod "downwardapi-volume-b878cb27-23eb-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.199678569s
Dec 21 12:16:39.225: INFO: Pod "downwardapi-volume-b878cb27-23eb-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.468258643s
Dec 21 12:16:41.243: INFO: Pod "downwardapi-volume-b878cb27-23eb-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.486120135s
STEP: Saw pod success
Dec 21 12:16:41.243: INFO: Pod "downwardapi-volume-b878cb27-23eb-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:16:41.251: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b878cb27-23eb-11ea-bbd3-0242ac110005 container client-container: 
STEP: delete the pod
Dec 21 12:16:41.424: INFO: Waiting for pod downwardapi-volume-b878cb27-23eb-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:16:41.501: INFO: Pod downwardapi-volume-b878cb27-23eb-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:16:41.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6v8xq" for this suite.
Dec 21 12:16:47.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:16:47.692: INFO: namespace: e2e-tests-downward-api-6v8xq, resource: bindings, ignored listing per whitelist
Dec 21 12:16:47.792: INFO: namespace e2e-tests-downward-api-6v8xq deletion completed in 6.264931736s

• [SLOW TEST:17.232 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:16:47.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 21 12:16:48.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:16:58.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-nptr4" for this suite.
Dec 21 12:17:43.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:17:43.177: INFO: namespace: e2e-tests-pods-nptr4, resource: bindings, ignored listing per whitelist
Dec 21 12:17:43.194: INFO: namespace e2e-tests-pods-nptr4 deletion completed in 44.197505194s

• [SLOW TEST:55.402 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:17:43.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 21 12:17:43.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:17:53.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-f9bnv" for this suite.
Dec 21 12:18:35.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:18:35.709: INFO: namespace: e2e-tests-pods-f9bnv, resource: bindings, ignored listing per whitelist
Dec 21 12:18:35.793: INFO: namespace e2e-tests-pods-f9bnv deletion completed in 42.258448492s

• [SLOW TEST:52.599 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:18:35.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-031da171-23ec-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 21 12:18:35.986: INFO: Waiting up to 5m0s for pod "pod-secrets-031e1c07-23ec-11ea-bbd3-0242ac110005" in namespace "e2e-tests-secrets-2dssm" to be "success or failure"
Dec 21 12:18:36.074: INFO: Pod "pod-secrets-031e1c07-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 87.685985ms
Dec 21 12:18:38.401: INFO: Pod "pod-secrets-031e1c07-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.415479701s
Dec 21 12:18:40.416: INFO: Pod "pod-secrets-031e1c07-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.430054748s
Dec 21 12:18:42.577: INFO: Pod "pod-secrets-031e1c07-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.591279848s
Dec 21 12:18:45.296: INFO: Pod "pod-secrets-031e1c07-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.310383056s
Dec 21 12:18:47.330: INFO: Pod "pod-secrets-031e1c07-23ec-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.343834715s
STEP: Saw pod success
Dec 21 12:18:47.330: INFO: Pod "pod-secrets-031e1c07-23ec-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:18:47.337: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-031e1c07-23ec-11ea-bbd3-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 21 12:18:47.744: INFO: Waiting for pod pod-secrets-031e1c07-23ec-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:18:47.832: INFO: Pod pod-secrets-031e1c07-23ec-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:18:47.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2dssm" for this suite.
Dec 21 12:18:53.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:18:53.940: INFO: namespace: e2e-tests-secrets-2dssm, resource: bindings, ignored listing per whitelist
Dec 21 12:18:54.069: INFO: namespace e2e-tests-secrets-2dssm deletion completed in 6.227075008s

• [SLOW TEST:18.276 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:18:54.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-0e095d37-23ec-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 21 12:18:54.345: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0e0ac712-23ec-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-mns5s" to be "success or failure"
Dec 21 12:18:54.360: INFO: Pod "pod-projected-configmaps-0e0ac712-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.367853ms
Dec 21 12:18:56.373: INFO: Pod "pod-projected-configmaps-0e0ac712-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027885528s
Dec 21 12:18:58.389: INFO: Pod "pod-projected-configmaps-0e0ac712-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043674518s
Dec 21 12:19:00.411: INFO: Pod "pod-projected-configmaps-0e0ac712-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066178902s
Dec 21 12:19:02.435: INFO: Pod "pod-projected-configmaps-0e0ac712-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089688699s
Dec 21 12:19:04.455: INFO: Pod "pod-projected-configmaps-0e0ac712-23ec-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.10949831s
STEP: Saw pod success
Dec 21 12:19:04.455: INFO: Pod "pod-projected-configmaps-0e0ac712-23ec-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:19:04.474: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-0e0ac712-23ec-11ea-bbd3-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 21 12:19:05.479: INFO: Waiting for pod pod-projected-configmaps-0e0ac712-23ec-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:19:05.831: INFO: Pod pod-projected-configmaps-0e0ac712-23ec-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:19:05.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mns5s" for this suite.
Dec 21 12:19:12.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:19:12.154: INFO: namespace: e2e-tests-projected-mns5s, resource: bindings, ignored listing per whitelist
Dec 21 12:19:12.259: INFO: namespace e2e-tests-projected-mns5s deletion completed in 6.38373613s

• [SLOW TEST:18.189 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:19:12.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-18db01f9-23ec-11ea-bbd3-0242ac110005
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-18db01f9-23ec-11ea-bbd3-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:19:26.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-59h52" for this suite.
Dec 21 12:19:50.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:19:51.104: INFO: namespace: e2e-tests-projected-59h52, resource: bindings, ignored listing per whitelist
Dec 21 12:19:51.152: INFO: namespace e2e-tests-projected-59h52 deletion completed in 24.276703963s

• [SLOW TEST:38.894 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:19:51.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 21 12:20:17.535: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wr8c8 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 12:20:17.536: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 12:20:18.070: INFO: Exec stderr: ""
Dec 21 12:20:18.070: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wr8c8 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 12:20:18.071: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 12:20:18.425: INFO: Exec stderr: ""
Dec 21 12:20:18.426: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wr8c8 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 12:20:18.426: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 12:20:18.807: INFO: Exec stderr: ""
Dec 21 12:20:18.807: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wr8c8 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 12:20:18.807: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 12:20:19.208: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 21 12:20:19.208: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wr8c8 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 12:20:19.208: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 12:20:19.541: INFO: Exec stderr: ""
Dec 21 12:20:19.541: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wr8c8 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 12:20:19.541: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 12:20:19.907: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 21 12:20:19.907: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wr8c8 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 12:20:19.908: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 12:20:20.265: INFO: Exec stderr: ""
Dec 21 12:20:20.266: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wr8c8 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 12:20:20.266: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 12:20:20.815: INFO: Exec stderr: ""
Dec 21 12:20:20.815: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wr8c8 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 12:20:20.815: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 12:20:21.216: INFO: Exec stderr: ""
Dec 21 12:20:21.216: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-wr8c8 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 12:20:21.216: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 12:20:21.502: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:20:21.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-wr8c8" for this suite.
Dec 21 12:21:17.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:21:17.632: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-wr8c8, resource: bindings, ignored listing per whitelist
Dec 21 12:21:17.725: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-wr8c8 deletion completed in 56.211914854s

• [SLOW TEST:86.571 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:21:17.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-63b86443-23ec-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 21 12:21:18.078: INFO: Waiting up to 5m0s for pod "pod-configmaps-63b9d945-23ec-11ea-bbd3-0242ac110005" in namespace "e2e-tests-configmap-hhrpm" to be "success or failure"
Dec 21 12:21:18.090: INFO: Pod "pod-configmaps-63b9d945-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.131956ms
Dec 21 12:21:20.223: INFO: Pod "pod-configmaps-63b9d945-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145479604s
Dec 21 12:21:22.242: INFO: Pod "pod-configmaps-63b9d945-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164231075s
Dec 21 12:21:24.326: INFO: Pod "pod-configmaps-63b9d945-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.248525589s
Dec 21 12:21:26.345: INFO: Pod "pod-configmaps-63b9d945-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.26774012s
Dec 21 12:21:28.357: INFO: Pod "pod-configmaps-63b9d945-23ec-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.279061178s
STEP: Saw pod success
Dec 21 12:21:28.357: INFO: Pod "pod-configmaps-63b9d945-23ec-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:21:28.365: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-63b9d945-23ec-11ea-bbd3-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 21 12:21:28.662: INFO: Waiting for pod pod-configmaps-63b9d945-23ec-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:21:28.685: INFO: Pod pod-configmaps-63b9d945-23ec-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:21:28.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-hhrpm" for this suite.
Dec 21 12:21:34.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:21:34.992: INFO: namespace: e2e-tests-configmap-hhrpm, resource: bindings, ignored listing per whitelist
Dec 21 12:21:35.275: INFO: namespace e2e-tests-configmap-hhrpm deletion completed in 6.565907182s

• [SLOW TEST:17.549 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:21:35.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 21 12:21:35.542: INFO: Waiting up to 5m0s for pod "downward-api-6e230802-23ec-11ea-bbd3-0242ac110005" in namespace "e2e-tests-downward-api-jcpd8" to be "success or failure"
Dec 21 12:21:35.559: INFO: Pod "downward-api-6e230802-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.171782ms
Dec 21 12:21:38.177: INFO: Pod "downward-api-6e230802-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.635598176s
Dec 21 12:21:40.224: INFO: Pod "downward-api-6e230802-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.682587875s
Dec 21 12:21:42.506: INFO: Pod "downward-api-6e230802-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.964269039s
Dec 21 12:21:44.863: INFO: Pod "downward-api-6e230802-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.321421953s
Dec 21 12:21:47.028: INFO: Pod "downward-api-6e230802-23ec-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.486634142s
STEP: Saw pod success
Dec 21 12:21:47.028: INFO: Pod "downward-api-6e230802-23ec-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:21:47.038: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-6e230802-23ec-11ea-bbd3-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 21 12:21:47.657: INFO: Waiting for pod downward-api-6e230802-23ec-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:21:47.664: INFO: Pod downward-api-6e230802-23ec-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:21:47.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jcpd8" for this suite.
Dec 21 12:21:53.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:21:53.921: INFO: namespace: e2e-tests-downward-api-jcpd8, resource: bindings, ignored listing per whitelist
Dec 21 12:21:54.000: INFO: namespace e2e-tests-downward-api-jcpd8 deletion completed in 6.325035655s

• [SLOW TEST:18.724 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:21:54.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-7954d035-23ec-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 21 12:21:54.335: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-79562527-23ec-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-flqnq" to be "success or failure"
Dec 21 12:21:54.374: INFO: Pod "pod-projected-secrets-79562527-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.380818ms
Dec 21 12:21:56.403: INFO: Pod "pod-projected-secrets-79562527-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067120735s
Dec 21 12:21:58.422: INFO: Pod "pod-projected-secrets-79562527-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086504222s
Dec 21 12:22:00.493: INFO: Pod "pod-projected-secrets-79562527-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158038636s
Dec 21 12:22:02.524: INFO: Pod "pod-projected-secrets-79562527-23ec-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.188990959s
Dec 21 12:22:04.544: INFO: Pod "pod-projected-secrets-79562527-23ec-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.208185988s
STEP: Saw pod success
Dec 21 12:22:04.544: INFO: Pod "pod-projected-secrets-79562527-23ec-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:22:04.556: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-79562527-23ec-11ea-bbd3-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 21 12:22:05.321: INFO: Waiting for pod pod-projected-secrets-79562527-23ec-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:22:05.596: INFO: Pod pod-projected-secrets-79562527-23ec-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:22:05.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-flqnq" for this suite.
Dec 21 12:22:11.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:22:11.720: INFO: namespace: e2e-tests-projected-flqnq, resource: bindings, ignored listing per whitelist
Dec 21 12:22:12.037: INFO: namespace e2e-tests-projected-flqnq deletion completed in 6.426012411s

• [SLOW TEST:18.036 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:22:12.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Dec 21 12:22:12.460: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:22:29.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-w8lx9" for this suite.
Dec 21 12:22:35.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:22:35.863: INFO: namespace: e2e-tests-init-container-w8lx9, resource: bindings, ignored listing per whitelist
Dec 21 12:22:35.891: INFO: namespace e2e-tests-init-container-w8lx9 deletion completed in 6.243429369s

• [SLOW TEST:23.854 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:22:35.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 21 12:23:08.646: INFO: Container started at 2019-12-21 12:22:45 +0000 UTC, pod became ready at 2019-12-21 12:23:07 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:23:08.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-ljmpc" for this suite.
Dec 21 12:23:32.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:23:32.750: INFO: namespace: e2e-tests-container-probe-ljmpc, resource: bindings, ignored listing per whitelist
Dec 21 12:23:32.957: INFO: namespace e2e-tests-container-probe-ljmpc deletion completed in 24.293150082s

• [SLOW TEST:57.066 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:23:32.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-5c5w
STEP: Creating a pod to test atomic-volume-subpath
Dec 21 12:23:33.471: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-5c5w" in namespace "e2e-tests-subpath-4ndpp" to be "success or failure"
Dec 21 12:23:33.656: INFO: Pod "pod-subpath-test-secret-5c5w": Phase="Pending", Reason="", readiness=false. Elapsed: 184.774401ms
Dec 21 12:23:35.738: INFO: Pod "pod-subpath-test-secret-5c5w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.266700873s
Dec 21 12:23:37.757: INFO: Pod "pod-subpath-test-secret-5c5w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.285453216s
Dec 21 12:23:39.772: INFO: Pod "pod-subpath-test-secret-5c5w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.299952497s
Dec 21 12:23:41.789: INFO: Pod "pod-subpath-test-secret-5c5w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.317062935s
Dec 21 12:23:43.982: INFO: Pod "pod-subpath-test-secret-5c5w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.510379096s
Dec 21 12:23:46.046: INFO: Pod "pod-subpath-test-secret-5c5w": Phase="Pending", Reason="", readiness=false. Elapsed: 12.574417242s
Dec 21 12:23:48.062: INFO: Pod "pod-subpath-test-secret-5c5w": Phase="Pending", Reason="", readiness=false. Elapsed: 14.590656651s
Dec 21 12:23:50.080: INFO: Pod "pod-subpath-test-secret-5c5w": Phase="Pending", Reason="", readiness=false. Elapsed: 16.60813842s
Dec 21 12:23:52.093: INFO: Pod "pod-subpath-test-secret-5c5w": Phase="Running", Reason="", readiness=false. Elapsed: 18.621803003s
Dec 21 12:23:54.107: INFO: Pod "pod-subpath-test-secret-5c5w": Phase="Running", Reason="", readiness=false. Elapsed: 20.634924559s
Dec 21 12:23:56.134: INFO: Pod "pod-subpath-test-secret-5c5w": Phase="Running", Reason="", readiness=false. Elapsed: 22.662345527s
Dec 21 12:23:58.158: INFO: Pod "pod-subpath-test-secret-5c5w": Phase="Running", Reason="", readiness=false. Elapsed: 24.686353441s
Dec 21 12:24:00.179: INFO: Pod "pod-subpath-test-secret-5c5w": Phase="Running", Reason="", readiness=false. Elapsed: 26.707320178s
Dec 21 12:24:02.192: INFO: Pod "pod-subpath-test-secret-5c5w": Phase="Running", Reason="", readiness=false. Elapsed: 28.720128298s
Dec 21 12:24:04.203: INFO: Pod "pod-subpath-test-secret-5c5w": Phase="Running", Reason="", readiness=false. Elapsed: 30.731522048s
Dec 21 12:24:06.659: INFO: Pod "pod-subpath-test-secret-5c5w": Phase="Running", Reason="", readiness=false. Elapsed: 33.186963876s
Dec 21 12:24:08.713: INFO: Pod "pod-subpath-test-secret-5c5w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.241388381s
STEP: Saw pod success
Dec 21 12:24:08.713: INFO: Pod "pod-subpath-test-secret-5c5w" satisfied condition "success or failure"
Dec 21 12:24:08.726: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-5c5w container test-container-subpath-secret-5c5w: 
STEP: delete the pod
Dec 21 12:24:08.860: INFO: Waiting for pod pod-subpath-test-secret-5c5w to disappear
Dec 21 12:24:09.772: INFO: Pod pod-subpath-test-secret-5c5w no longer exists
STEP: Deleting pod pod-subpath-test-secret-5c5w
Dec 21 12:24:09.773: INFO: Deleting pod "pod-subpath-test-secret-5c5w" in namespace "e2e-tests-subpath-4ndpp"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:24:09.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-4ndpp" for this suite.
Dec 21 12:24:15.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:24:16.059: INFO: namespace: e2e-tests-subpath-4ndpp, resource: bindings, ignored listing per whitelist
Dec 21 12:24:16.170: INFO: namespace e2e-tests-subpath-4ndpp deletion completed in 6.362408059s

• [SLOW TEST:43.212 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:24:16.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-ce05bbb6-23ec-11ea-bbd3-0242ac110005
Dec 21 12:24:16.513: INFO: Pod name my-hostname-basic-ce05bbb6-23ec-11ea-bbd3-0242ac110005: Found 0 pods out of 1
Dec 21 12:24:21.547: INFO: Pod name my-hostname-basic-ce05bbb6-23ec-11ea-bbd3-0242ac110005: Found 1 pods out of 1
Dec 21 12:24:21.547: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ce05bbb6-23ec-11ea-bbd3-0242ac110005" are running
Dec 21 12:24:27.572: INFO: Pod "my-hostname-basic-ce05bbb6-23ec-11ea-bbd3-0242ac110005-htd7h" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-21 12:24:16 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-21 12:24:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ce05bbb6-23ec-11ea-bbd3-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-21 12:24:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ce05bbb6-23ec-11ea-bbd3-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-21 12:24:16 +0000 UTC Reason: Message:}])
Dec 21 12:24:27.573: INFO: Trying to dial the pod
Dec 21 12:24:32.664: INFO: Controller my-hostname-basic-ce05bbb6-23ec-11ea-bbd3-0242ac110005: Got expected result from replica 1 [my-hostname-basic-ce05bbb6-23ec-11ea-bbd3-0242ac110005-htd7h]: "my-hostname-basic-ce05bbb6-23ec-11ea-bbd3-0242ac110005-htd7h", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:24:32.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-br9v6" for this suite.
Dec 21 12:24:40.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:24:40.833: INFO: namespace: e2e-tests-replication-controller-br9v6, resource: bindings, ignored listing per whitelist
Dec 21 12:24:40.913: INFO: namespace e2e-tests-replication-controller-br9v6 deletion completed in 8.207516315s

• [SLOW TEST:24.742 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:24:40.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 21 12:24:53.688: INFO: Successfully updated pod "pod-update-dcb26349-23ec-11ea-bbd3-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Dec 21 12:24:53.704: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:24:53.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-dj5kg" for this suite.
Dec 21 12:25:17.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:25:17.914: INFO: namespace: e2e-tests-pods-dj5kg, resource: bindings, ignored listing per whitelist
Dec 21 12:25:18.034: INFO: namespace e2e-tests-pods-dj5kg deletion completed in 24.32155236s

• [SLOW TEST:37.122 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:25:18.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 21 12:25:18.559: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 21 12:25:18.639: INFO: Number of nodes with available pods: 0
Dec 21 12:25:18.640: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 21 12:25:18.714: INFO: Number of nodes with available pods: 0
Dec 21 12:25:18.714: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:20.112: INFO: Number of nodes with available pods: 0
Dec 21 12:25:20.112: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:21.108: INFO: Number of nodes with available pods: 0
Dec 21 12:25:21.109: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:21.730: INFO: Number of nodes with available pods: 0
Dec 21 12:25:21.730: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:22.759: INFO: Number of nodes with available pods: 0
Dec 21 12:25:22.759: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:23.734: INFO: Number of nodes with available pods: 0
Dec 21 12:25:23.734: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:25.328: INFO: Number of nodes with available pods: 0
Dec 21 12:25:25.328: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:26.006: INFO: Number of nodes with available pods: 0
Dec 21 12:25:26.007: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:26.732: INFO: Number of nodes with available pods: 0
Dec 21 12:25:26.732: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:27.733: INFO: Number of nodes with available pods: 0
Dec 21 12:25:27.733: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:28.751: INFO: Number of nodes with available pods: 1
Dec 21 12:25:28.751: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 21 12:25:28.939: INFO: Number of nodes with available pods: 1
Dec 21 12:25:28.940: INFO: Number of running nodes: 0, number of available pods: 1
Dec 21 12:25:29.959: INFO: Number of nodes with available pods: 0
Dec 21 12:25:29.959: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 21 12:25:29.997: INFO: Number of nodes with available pods: 0
Dec 21 12:25:29.997: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:31.499: INFO: Number of nodes with available pods: 0
Dec 21 12:25:31.499: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:32.026: INFO: Number of nodes with available pods: 0
Dec 21 12:25:32.026: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:33.011: INFO: Number of nodes with available pods: 0
Dec 21 12:25:33.011: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:34.046: INFO: Number of nodes with available pods: 0
Dec 21 12:25:34.046: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:35.019: INFO: Number of nodes with available pods: 0
Dec 21 12:25:35.019: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:36.052: INFO: Number of nodes with available pods: 0
Dec 21 12:25:36.052: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:37.024: INFO: Number of nodes with available pods: 0
Dec 21 12:25:37.024: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:38.007: INFO: Number of nodes with available pods: 0
Dec 21 12:25:38.007: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:39.021: INFO: Number of nodes with available pods: 0
Dec 21 12:25:39.021: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:40.028: INFO: Number of nodes with available pods: 0
Dec 21 12:25:40.028: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:41.010: INFO: Number of nodes with available pods: 0
Dec 21 12:25:41.011: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:42.011: INFO: Number of nodes with available pods: 0
Dec 21 12:25:42.011: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:43.055: INFO: Number of nodes with available pods: 0
Dec 21 12:25:43.055: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:44.283: INFO: Number of nodes with available pods: 0
Dec 21 12:25:44.283: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:45.029: INFO: Number of nodes with available pods: 0
Dec 21 12:25:45.029: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:46.019: INFO: Number of nodes with available pods: 0
Dec 21 12:25:46.019: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:47.033: INFO: Number of nodes with available pods: 0
Dec 21 12:25:47.033: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:48.881: INFO: Number of nodes with available pods: 0
Dec 21 12:25:48.881: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:49.357: INFO: Number of nodes with available pods: 0
Dec 21 12:25:49.357: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:50.018: INFO: Number of nodes with available pods: 0
Dec 21 12:25:50.018: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:25:51.073: INFO: Number of nodes with available pods: 1
Dec 21 12:25:51.073: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-wz2ht, will wait for the garbage collector to delete the pods
Dec 21 12:25:51.148: INFO: Deleting DaemonSet.extensions daemon-set took: 12.441234ms
Dec 21 12:25:51.348: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.384959ms
Dec 21 12:26:02.712: INFO: Number of nodes with available pods: 0
Dec 21 12:26:02.712: INFO: Number of running nodes: 0, number of available pods: 0
Dec 21 12:26:02.724: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-wz2ht/daemonsets","resourceVersion":"15568070"},"items":null}

Dec 21 12:26:02.734: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-wz2ht/pods","resourceVersion":"15568070"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:26:02.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-wz2ht" for this suite.
Dec 21 12:26:10.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:26:11.004: INFO: namespace: e2e-tests-daemonsets-wz2ht, resource: bindings, ignored listing per whitelist
Dec 21 12:26:11.081: INFO: namespace e2e-tests-daemonsets-wz2ht deletion completed in 8.195421286s

• [SLOW TEST:53.046 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
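The poll loop earlier in this test repeats until every running node reports an available DaemonSet pod. The test constructs the DaemonSet in Go, but a minimal equivalent manifest, with illustrative names and the image used elsewhere in this run, would look roughly like:

```yaml
# Illustrative sketch only -- names and image are not taken from the test source.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set        # must match spec.selector
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```

Because a DaemonSet schedules one pod per eligible node, the single-node cluster here converges at "Number of running nodes: 1, number of available pods: 1" before teardown.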
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:26:11.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 21 12:26:22.151: INFO: Successfully updated pod "annotationupdate1294fcaf-23ed-11ea-bbd3-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:26:24.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v6stx" for this suite.
Dec 21 12:26:48.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:26:48.608: INFO: namespace: e2e-tests-projected-v6stx, resource: bindings, ignored listing per whitelist
Dec 21 12:26:48.793: INFO: namespace e2e-tests-projected-v6stx deletion completed in 24.442130887s

• [SLOW TEST:37.712 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
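The annotationupdate pod in this test mounts its own metadata through a projected downward-API volume; when the test patches the pod's annotations, the kubelet rewrites the mounted file and the test waits for the new content to appear. A sketch of such a pod (names, annotation value, and mount path are illustrative):

```yaml
# Illustrative sketch of a projected downward-API volume exposing annotations.
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate
  annotations:
    builder: alice             # value the test later mutates
spec:
  containers:
  - name: client
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations          # file rewritten on annotation updates
            fieldRef:
              fieldPath: metadata.annotations
```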
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:26:48.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-290b7eeb-23ed-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 21 12:26:49.191: INFO: Waiting up to 5m0s for pod "pod-configmaps-290d841e-23ed-11ea-bbd3-0242ac110005" in namespace "e2e-tests-configmap-bgncb" to be "success or failure"
Dec 21 12:26:49.230: INFO: Pod "pod-configmaps-290d841e-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.983223ms
Dec 21 12:26:51.247: INFO: Pod "pod-configmaps-290d841e-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055193522s
Dec 21 12:26:53.266: INFO: Pod "pod-configmaps-290d841e-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074676459s
Dec 21 12:26:55.286: INFO: Pod "pod-configmaps-290d841e-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094472337s
Dec 21 12:26:57.298: INFO: Pod "pod-configmaps-290d841e-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.106421116s
Dec 21 12:26:59.307: INFO: Pod "pod-configmaps-290d841e-23ed-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.115679782s
STEP: Saw pod success
Dec 21 12:26:59.307: INFO: Pod "pod-configmaps-290d841e-23ed-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:26:59.310: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-290d841e-23ed-11ea-bbd3-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 21 12:27:00.496: INFO: Waiting for pod pod-configmaps-290d841e-23ed-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:27:00.620: INFO: Pod pod-configmaps-290d841e-23ed-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:27:00.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bgncb" for this suite.
Dec 21 12:27:06.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:27:07.021: INFO: namespace: e2e-tests-configmap-bgncb, resource: bindings, ignored listing per whitelist
Dec 21 12:27:07.031: INFO: namespace e2e-tests-configmap-bgncb deletion completed in 6.391979863s

• [SLOW TEST:18.237 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
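"Consumable in multiple volumes in the same pod" means one ConfigMap is mounted at two paths and both mounts must serve the same data. A sketch with hypothetical names and keys:

```yaml
# Illustrative sketch: the same ConfigMap mounted twice in one pod.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume
```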
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:27:07.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Dec 21 12:27:07.288: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 21 12:27:07.420: INFO: Waiting for terminating namespaces to be deleted...
Dec 21 12:27:07.435: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Dec 21 12:27:07.465: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 21 12:27:07.465: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 21 12:27:07.465: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 21 12:27:07.465: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Dec 21 12:27:07.465: INFO: 	Container coredns ready: true, restart count 0
Dec 21 12:27:07.466: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container status recorded)
Dec 21 12:27:07.466: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 21 12:27:07.466: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Dec 21 12:27:07.466: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Dec 21 12:27:07.466: INFO: 	Container weave ready: true, restart count 0
Dec 21 12:27:07.466: INFO: 	Container weave-npc ready: true, restart count 0
Dec 21 12:27:07.466: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Dec 21 12:27:07.466: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15e26298b38d799a], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:27:08.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-c65dr" for this suite.
Dec 21 12:27:16.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:27:16.771: INFO: namespace: e2e-tests-sched-pred-c65dr, resource: bindings, ignored listing per whitelist
Dec 21 12:27:16.773: INFO: namespace e2e-tests-sched-pred-c65dr deletion completed in 8.205455351s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:9.742 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
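The FailedScheduling event recorded in this test is the expected outcome: the test submits a pod whose nodeSelector matches no node label, so the scheduler reports "0/1 nodes are available". An illustrative pod of that shape (label key/value hypothetical):

```yaml
# Illustrative sketch: a nodeSelector that no node satisfies.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    label: nonempty            # no node carries this label, so the pod stays Pending
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
```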
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:27:16.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-399c9f1f-23ed-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 21 12:27:16.976: INFO: Waiting up to 5m0s for pod "pod-secrets-399f136b-23ed-11ea-bbd3-0242ac110005" in namespace "e2e-tests-secrets-9bhwm" to be "success or failure"
Dec 21 12:27:16.984: INFO: Pod "pod-secrets-399f136b-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077261ms
Dec 21 12:27:19.387: INFO: Pod "pod-secrets-399f136b-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.41084781s
Dec 21 12:27:21.408: INFO: Pod "pod-secrets-399f136b-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431969678s
Dec 21 12:27:23.920: INFO: Pod "pod-secrets-399f136b-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.943511973s
Dec 21 12:27:26.103: INFO: Pod "pod-secrets-399f136b-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.127354657s
Dec 21 12:27:28.124: INFO: Pod "pod-secrets-399f136b-23ed-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.147944571s
STEP: Saw pod success
Dec 21 12:27:28.124: INFO: Pod "pod-secrets-399f136b-23ed-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:27:28.132: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-399f136b-23ed-11ea-bbd3-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 21 12:27:28.208: INFO: Waiting for pod pod-secrets-399f136b-23ed-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:27:28.213: INFO: Pod pod-secrets-399f136b-23ed-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:27:28.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-9bhwm" for this suite.
Dec 21 12:27:34.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:27:34.456: INFO: namespace: e2e-tests-secrets-9bhwm, resource: bindings, ignored listing per whitelist
Dec 21 12:27:34.540: INFO: namespace e2e-tests-secrets-9bhwm deletion completed in 6.32091595s

• [SLOW TEST:17.766 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
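The Secrets test follows the same pattern as the ConfigMap volume tests: create a Secret, mount it, and have the test container read the decoded file back. A sketch with hypothetical names (the value below is base64 for "value-1"):

```yaml
# Illustrative sketch: a Secret consumed as a volume.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
data:
  data-1: dmFsdWUtMQ==         # base64("value-1"); mounted files hold the decoded bytes
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
```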
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:27:34.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-44453ae3-23ed-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 21 12:27:34.835: INFO: Waiting up to 5m0s for pod "pod-configmaps-4447a5c8-23ed-11ea-bbd3-0242ac110005" in namespace "e2e-tests-configmap-mpnl4" to be "success or failure"
Dec 21 12:27:34.926: INFO: Pod "pod-configmaps-4447a5c8-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 90.944466ms
Dec 21 12:27:36.944: INFO: Pod "pod-configmaps-4447a5c8-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109176323s
Dec 21 12:27:38.978: INFO: Pod "pod-configmaps-4447a5c8-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142801457s
Dec 21 12:27:41.052: INFO: Pod "pod-configmaps-4447a5c8-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.216815693s
Dec 21 12:27:43.899: INFO: Pod "pod-configmaps-4447a5c8-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.063725097s
Dec 21 12:27:45.933: INFO: Pod "pod-configmaps-4447a5c8-23ed-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.09763162s
STEP: Saw pod success
Dec 21 12:27:45.933: INFO: Pod "pod-configmaps-4447a5c8-23ed-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:27:45.963: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-4447a5c8-23ed-11ea-bbd3-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 21 12:27:46.540: INFO: Waiting for pod pod-configmaps-4447a5c8-23ed-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:27:46.560: INFO: Pod pod-configmaps-4447a5c8-23ed-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:27:46.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-mpnl4" for this suite.
Dec 21 12:27:52.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:27:52.801: INFO: namespace: e2e-tests-configmap-mpnl4, resource: bindings, ignored listing per whitelist
Dec 21 12:27:52.902: INFO: namespace e2e-tests-configmap-mpnl4 deletion completed in 6.234656747s

• [SLOW TEST:18.361 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
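"Mappings and Item mode" refers to a ConfigMap volume that uses `items` to remap a key to a custom path and sets a per-file `mode`; the test container then checks both the remapped path and the permission bits. A sketch with hypothetical key and path names:

```yaml
# Illustrative sketch: per-item key remapping and file mode on a ConfigMap volume.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-mapped
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/configmap-volume/path/to/data-2 && cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-2
        path: path/to/data-2   # key remapped to a nested path
        mode: 0400             # per-item file mode the test can assert on
```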
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:27:52.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:28:03.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-rvg8q" for this suite.
Dec 21 12:28:51.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:28:51.488: INFO: namespace: e2e-tests-kubelet-test-rvg8q, resource: bindings, ignored listing per whitelist
Dec 21 12:28:51.557: INFO: namespace e2e-tests-kubelet-test-rvg8q deletion completed in 48.195481101s

• [SLOW TEST:58.655 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:28:51.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Dec 21 12:29:02.044: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-72304ac9-23ed-11ea-bbd3-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-zbxmb", SelfLink:"/api/v1/namespaces/e2e-tests-pods-zbxmb/pods/pod-submit-remove-72304ac9-23ed-11ea-bbd3-0242ac110005", UID:"723e58cb-23ed-11ea-a994-fa163e34d433", ResourceVersion:"15568448", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712528131, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"821099192"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-q55zk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001b8aec0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-q55zk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000cf1a88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00171fc20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000cf1ac0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc000cf1ae0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000cf1ae8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000cf1aec)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712528132, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712528141, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712528141, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712528131, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0017b9580), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0017b95a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, 
RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://6ae98744e58c4f1703a56ec54d9abfb8644450abbd65227903a9827c0855bb3c"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:29:12.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-zbxmb" for this suite.
Dec 21 12:29:18.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:29:18.753: INFO: namespace: e2e-tests-pods-zbxmb, resource: bindings, ignored listing per whitelist
Dec 21 12:29:18.794: INFO: namespace e2e-tests-pods-zbxmb deletion completed in 6.206568022s

• [SLOW TEST:27.237 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:29:18.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 21 12:29:18.997: INFO: Waiting up to 5m0s for pod "pod-8260cbe6-23ed-11ea-bbd3-0242ac110005" in namespace "e2e-tests-emptydir-2nkq4" to be "success or failure"
Dec 21 12:29:19.009: INFO: Pod "pod-8260cbe6-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.565293ms
Dec 21 12:29:21.394: INFO: Pod "pod-8260cbe6-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.396687538s
Dec 21 12:29:23.424: INFO: Pod "pod-8260cbe6-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.426184252s
Dec 21 12:29:26.127: INFO: Pod "pod-8260cbe6-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.128973107s
Dec 21 12:29:28.150: INFO: Pod "pod-8260cbe6-23ed-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.152832566s
Dec 21 12:29:30.169: INFO: Pod "pod-8260cbe6-23ed-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.171240109s
STEP: Saw pod success
Dec 21 12:29:30.169: INFO: Pod "pod-8260cbe6-23ed-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:29:30.186: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-8260cbe6-23ed-11ea-bbd3-0242ac110005 container test-container: 
STEP: delete the pod
Dec 21 12:29:30.300: INFO: Waiting for pod pod-8260cbe6-23ed-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:29:30.310: INFO: Pod pod-8260cbe6-23ed-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:29:30.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2nkq4" for this suite.
Dec 21 12:29:36.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:29:36.755: INFO: namespace: e2e-tests-emptydir-2nkq4, resource: bindings, ignored listing per whitelist
Dec 21 12:29:36.818: INFO: namespace e2e-tests-emptydir-2nkq4 deletion completed in 6.50094211s

• [SLOW TEST:18.024 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:29:36.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-lg8kn
Dec 21 12:29:47.055: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-lg8kn
STEP: checking the pod's current state and verifying that restartCount is present
Dec 21 12:29:47.059: INFO: Initial restart count of pod liveness-http is 0
Dec 21 12:30:01.237: INFO: Restart count of pod e2e-tests-container-probe-lg8kn/liveness-http is now 1 (14.177937313s elapsed)
Dec 21 12:30:21.908: INFO: Restart count of pod e2e-tests-container-probe-lg8kn/liveness-http is now 2 (34.848515867s elapsed)
Dec 21 12:30:42.083: INFO: Restart count of pod e2e-tests-container-probe-lg8kn/liveness-http is now 3 (55.02369237s elapsed)
Dec 21 12:31:00.266: INFO: Restart count of pod e2e-tests-container-probe-lg8kn/liveness-http is now 4 (1m13.207343198s elapsed)
Dec 21 12:32:13.069: INFO: Restart count of pod e2e-tests-container-probe-lg8kn/liveness-http is now 5 (2m26.009881713s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:32:13.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-lg8kn" for this suite.
Dec 21 12:32:19.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:32:19.325: INFO: namespace: e2e-tests-container-probe-lg8kn, resource: bindings, ignored listing per whitelist
Dec 21 12:32:19.453: INFO: namespace e2e-tests-container-probe-lg8kn deletion completed in 6.20008608s

• [SLOW TEST:162.635 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:32:19.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 21 12:32:19.950: INFO: Number of nodes with available pods: 0
Dec 21 12:32:19.950: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:21.462: INFO: Number of nodes with available pods: 0
Dec 21 12:32:21.462: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:21.986: INFO: Number of nodes with available pods: 0
Dec 21 12:32:21.986: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:22.992: INFO: Number of nodes with available pods: 0
Dec 21 12:32:22.992: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:23.978: INFO: Number of nodes with available pods: 0
Dec 21 12:32:23.979: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:25.936: INFO: Number of nodes with available pods: 0
Dec 21 12:32:25.936: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:25.974: INFO: Number of nodes with available pods: 0
Dec 21 12:32:25.975: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:27.074: INFO: Number of nodes with available pods: 0
Dec 21 12:32:27.074: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:27.974: INFO: Number of nodes with available pods: 0
Dec 21 12:32:27.975: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:29.003: INFO: Number of nodes with available pods: 1
Dec 21 12:32:29.003: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 21 12:32:29.063: INFO: Number of nodes with available pods: 0
Dec 21 12:32:29.063: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:30.085: INFO: Number of nodes with available pods: 0
Dec 21 12:32:30.085: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:31.102: INFO: Number of nodes with available pods: 0
Dec 21 12:32:31.102: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:32.119: INFO: Number of nodes with available pods: 0
Dec 21 12:32:32.119: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:33.084: INFO: Number of nodes with available pods: 0
Dec 21 12:32:33.084: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:34.081: INFO: Number of nodes with available pods: 0
Dec 21 12:32:34.081: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:35.097: INFO: Number of nodes with available pods: 0
Dec 21 12:32:35.097: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:36.089: INFO: Number of nodes with available pods: 0
Dec 21 12:32:36.090: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:37.086: INFO: Number of nodes with available pods: 0
Dec 21 12:32:37.086: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:38.086: INFO: Number of nodes with available pods: 0
Dec 21 12:32:38.087: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:39.088: INFO: Number of nodes with available pods: 0
Dec 21 12:32:39.088: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:40.087: INFO: Number of nodes with available pods: 0
Dec 21 12:32:40.087: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:41.090: INFO: Number of nodes with available pods: 0
Dec 21 12:32:41.090: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:42.297: INFO: Number of nodes with available pods: 0
Dec 21 12:32:42.297: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:43.089: INFO: Number of nodes with available pods: 0
Dec 21 12:32:43.089: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:44.077: INFO: Number of nodes with available pods: 0
Dec 21 12:32:44.077: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:45.263: INFO: Number of nodes with available pods: 0
Dec 21 12:32:45.263: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:46.118: INFO: Number of nodes with available pods: 0
Dec 21 12:32:46.118: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:47.085: INFO: Number of nodes with available pods: 0
Dec 21 12:32:47.085: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:48.086: INFO: Number of nodes with available pods: 0
Dec 21 12:32:48.086: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:49.496: INFO: Number of nodes with available pods: 0
Dec 21 12:32:49.496: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:50.440: INFO: Number of nodes with available pods: 0
Dec 21 12:32:50.440: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:51.075: INFO: Number of nodes with available pods: 0
Dec 21 12:32:51.075: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:52.091: INFO: Number of nodes with available pods: 0
Dec 21 12:32:52.091: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 12:32:53.086: INFO: Number of nodes with available pods: 1
Dec 21 12:32:53.087: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-qmbqf, will wait for the garbage collector to delete the pods
Dec 21 12:32:53.213: INFO: Deleting DaemonSet.extensions daemon-set took: 20.04377ms
Dec 21 12:32:53.413: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.36505ms
Dec 21 12:33:00.883: INFO: Number of nodes with available pods: 0
Dec 21 12:33:00.883: INFO: Number of running nodes: 0, number of available pods: 0
Dec 21 12:33:00.891: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-qmbqf/daemonsets","resourceVersion":"15568863"},"items":null}

Dec 21 12:33:00.896: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-qmbqf/pods","resourceVersion":"15568863"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:33:00.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-qmbqf" for this suite.
Dec 21 12:33:08.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:33:08.964: INFO: namespace: e2e-tests-daemonsets-qmbqf, resource: bindings, ignored listing per whitelist
Dec 21 12:33:09.136: INFO: namespace e2e-tests-daemonsets-qmbqf deletion completed in 8.225919366s

• [SLOW TEST:49.682 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:33:09.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-76wb
STEP: Creating a pod to test atomic-volume-subpath
Dec 21 12:33:09.570: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-76wb" in namespace "e2e-tests-subpath-27zst" to be "success or failure"
Dec 21 12:33:09.614: INFO: Pod "pod-subpath-test-projected-76wb": Phase="Pending", Reason="", readiness=false. Elapsed: 44.00444ms
Dec 21 12:33:11.671: INFO: Pod "pod-subpath-test-projected-76wb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100225933s
Dec 21 12:33:13.685: INFO: Pod "pod-subpath-test-projected-76wb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11486976s
Dec 21 12:33:16.133: INFO: Pod "pod-subpath-test-projected-76wb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.562867008s
Dec 21 12:33:19.005: INFO: Pod "pod-subpath-test-projected-76wb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.434565133s
Dec 21 12:33:21.019: INFO: Pod "pod-subpath-test-projected-76wb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.448981081s
Dec 21 12:33:23.073: INFO: Pod "pod-subpath-test-projected-76wb": Phase="Pending", Reason="", readiness=false. Elapsed: 13.502841661s
Dec 21 12:33:25.085: INFO: Pod "pod-subpath-test-projected-76wb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.514665015s
Dec 21 12:33:27.099: INFO: Pod "pod-subpath-test-projected-76wb": Phase="Running", Reason="", readiness=false. Elapsed: 17.52807308s
Dec 21 12:33:29.115: INFO: Pod "pod-subpath-test-projected-76wb": Phase="Running", Reason="", readiness=false. Elapsed: 19.544917222s
Dec 21 12:33:31.148: INFO: Pod "pod-subpath-test-projected-76wb": Phase="Running", Reason="", readiness=false. Elapsed: 21.577944667s
Dec 21 12:33:33.170: INFO: Pod "pod-subpath-test-projected-76wb": Phase="Running", Reason="", readiness=false. Elapsed: 23.599061973s
Dec 21 12:33:35.186: INFO: Pod "pod-subpath-test-projected-76wb": Phase="Running", Reason="", readiness=false. Elapsed: 25.615221011s
Dec 21 12:33:37.226: INFO: Pod "pod-subpath-test-projected-76wb": Phase="Running", Reason="", readiness=false. Elapsed: 27.65533611s
Dec 21 12:33:39.246: INFO: Pod "pod-subpath-test-projected-76wb": Phase="Running", Reason="", readiness=false. Elapsed: 29.675058587s
Dec 21 12:33:41.266: INFO: Pod "pod-subpath-test-projected-76wb": Phase="Running", Reason="", readiness=false. Elapsed: 31.695554496s
Dec 21 12:33:43.289: INFO: Pod "pod-subpath-test-projected-76wb": Phase="Running", Reason="", readiness=false. Elapsed: 33.718144903s
Dec 21 12:33:45.313: INFO: Pod "pod-subpath-test-projected-76wb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.742390168s
STEP: Saw pod success
Dec 21 12:33:45.313: INFO: Pod "pod-subpath-test-projected-76wb" satisfied condition "success or failure"
Dec 21 12:33:45.322: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-76wb container test-container-subpath-projected-76wb: 
STEP: delete the pod
Dec 21 12:33:45.608: INFO: Waiting for pod pod-subpath-test-projected-76wb to disappear
Dec 21 12:33:45.918: INFO: Pod pod-subpath-test-projected-76wb no longer exists
STEP: Deleting pod pod-subpath-test-projected-76wb
Dec 21 12:33:45.919: INFO: Deleting pod "pod-subpath-test-projected-76wb" in namespace "e2e-tests-subpath-27zst"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:33:45.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-27zst" for this suite.
Dec 21 12:33:51.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:33:52.079: INFO: namespace: e2e-tests-subpath-27zst, resource: bindings, ignored listing per whitelist
Dec 21 12:33:52.209: INFO: namespace e2e-tests-subpath-27zst deletion completed in 6.267338785s

• [SLOW TEST:43.073 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:33:52.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-255a5990-23ee-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 21 12:33:52.473: INFO: Waiting up to 5m0s for pod "pod-secrets-255c7596-23ee-11ea-bbd3-0242ac110005" in namespace "e2e-tests-secrets-mpgvh" to be "success or failure"
Dec 21 12:33:52.618: INFO: Pod "pod-secrets-255c7596-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 144.81511ms
Dec 21 12:33:54.638: INFO: Pod "pod-secrets-255c7596-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164779629s
Dec 21 12:33:56.655: INFO: Pod "pod-secrets-255c7596-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181285519s
Dec 21 12:33:59.006: INFO: Pod "pod-secrets-255c7596-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.532818034s
Dec 21 12:34:01.018: INFO: Pod "pod-secrets-255c7596-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.544498627s
Dec 21 12:34:03.033: INFO: Pod "pod-secrets-255c7596-23ee-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.559731219s
STEP: Saw pod success
Dec 21 12:34:03.033: INFO: Pod "pod-secrets-255c7596-23ee-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:34:03.038: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-255c7596-23ee-11ea-bbd3-0242ac110005 container secret-env-test: 
STEP: delete the pod
Dec 21 12:34:03.198: INFO: Waiting for pod pod-secrets-255c7596-23ee-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:34:03.272: INFO: Pod pod-secrets-255c7596-23ee-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:34:03.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-mpgvh" for this suite.
Dec 21 12:34:09.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:34:09.443: INFO: namespace: e2e-tests-secrets-mpgvh, resource: bindings, ignored listing per whitelist
Dec 21 12:34:09.507: INFO: namespace e2e-tests-secrets-mpgvh deletion completed in 6.220384273s

• [SLOW TEST:17.297 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:34:09.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-2faaff9a-23ee-11ea-bbd3-0242ac110005
STEP: Creating secret with name secret-projected-all-test-volume-2faaff83-23ee-11ea-bbd3-0242ac110005
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 21 12:34:09.794: INFO: Waiting up to 5m0s for pod "projected-volume-2faafd20-23ee-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-2fltq" to be "success or failure"
Dec 21 12:34:09.815: INFO: Pod "projected-volume-2faafd20-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.921141ms
Dec 21 12:34:11.835: INFO: Pod "projected-volume-2faafd20-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040947634s
Dec 21 12:34:13.856: INFO: Pod "projected-volume-2faafd20-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061328114s
Dec 21 12:34:15.881: INFO: Pod "projected-volume-2faafd20-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086654108s
Dec 21 12:34:17.927: INFO: Pod "projected-volume-2faafd20-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.133141099s
Dec 21 12:34:19.956: INFO: Pod "projected-volume-2faafd20-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.162065818s
Dec 21 12:34:21.976: INFO: Pod "projected-volume-2faafd20-23ee-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.181550715s
STEP: Saw pod success
Dec 21 12:34:21.976: INFO: Pod "projected-volume-2faafd20-23ee-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:34:21.984: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-2faafd20-23ee-11ea-bbd3-0242ac110005 container projected-all-volume-test: 
STEP: delete the pod
Dec 21 12:34:22.805: INFO: Waiting for pod projected-volume-2faafd20-23ee-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:34:22.818: INFO: Pod projected-volume-2faafd20-23ee-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:34:22.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2fltq" for this suite.
Dec 21 12:34:31.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:34:31.100: INFO: namespace: e2e-tests-projected-2fltq, resource: bindings, ignored listing per whitelist
Dec 21 12:34:31.189: INFO: namespace e2e-tests-projected-2fltq deletion completed in 8.222546694s

• [SLOW TEST:21.681 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:34:31.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 21 12:34:31.460: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c9e5610-23ee-11ea-bbd3-0242ac110005" in namespace "e2e-tests-downward-api-g84qv" to be "success or failure"
Dec 21 12:34:31.544: INFO: Pod "downwardapi-volume-3c9e5610-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 83.818041ms
Dec 21 12:34:34.021: INFO: Pod "downwardapi-volume-3c9e5610-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.560927568s
Dec 21 12:34:36.035: INFO: Pod "downwardapi-volume-3c9e5610-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.575613398s
Dec 21 12:34:38.150: INFO: Pod "downwardapi-volume-3c9e5610-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.690106172s
Dec 21 12:34:40.279: INFO: Pod "downwardapi-volume-3c9e5610-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.819303356s
Dec 21 12:34:42.304: INFO: Pod "downwardapi-volume-3c9e5610-23ee-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.844385765s
STEP: Saw pod success
Dec 21 12:34:42.304: INFO: Pod "downwardapi-volume-3c9e5610-23ee-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:34:42.319: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3c9e5610-23ee-11ea-bbd3-0242ac110005 container client-container: 
STEP: delete the pod
Dec 21 12:34:42.705: INFO: Waiting for pod downwardapi-volume-3c9e5610-23ee-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:34:42.731: INFO: Pod downwardapi-volume-3c9e5610-23ee-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:34:42.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-g84qv" for this suite.
Dec 21 12:34:48.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:34:48.993: INFO: namespace: e2e-tests-downward-api-g84qv, resource: bindings, ignored listing per whitelist
Dec 21 12:34:49.001: INFO: namespace e2e-tests-downward-api-g84qv deletion completed in 6.26488297s

• [SLOW TEST:17.811 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:34:49.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-4752ea79-23ee-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 21 12:34:49.429: INFO: Waiting up to 5m0s for pod "pod-configmaps-47545343-23ee-11ea-bbd3-0242ac110005" in namespace "e2e-tests-configmap-gm76v" to be "success or failure"
Dec 21 12:34:49.435: INFO: Pod "pod-configmaps-47545343-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119406ms
Dec 21 12:34:51.930: INFO: Pod "pod-configmaps-47545343-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.500875336s
Dec 21 12:34:53.948: INFO: Pod "pod-configmaps-47545343-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.518543613s
Dec 21 12:34:56.192: INFO: Pod "pod-configmaps-47545343-23ee-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.763075216s
Dec 21 12:34:58.205: INFO: Pod "pod-configmaps-47545343-23ee-11ea-bbd3-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.775610911s
Dec 21 12:35:00.435: INFO: Pod "pod-configmaps-47545343-23ee-11ea-bbd3-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 11.005370867s
Dec 21 12:35:02.463: INFO: Pod "pod-configmaps-47545343-23ee-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.033542345s
STEP: Saw pod success
Dec 21 12:35:02.463: INFO: Pod "pod-configmaps-47545343-23ee-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:35:02.478: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-47545343-23ee-11ea-bbd3-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Dec 21 12:35:02.706: INFO: Waiting for pod pod-configmaps-47545343-23ee-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:35:02.835: INFO: Pod pod-configmaps-47545343-23ee-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:35:02.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gm76v" for this suite.
Dec 21 12:35:08.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:35:08.985: INFO: namespace: e2e-tests-configmap-gm76v, resource: bindings, ignored listing per whitelist
Dec 21 12:35:09.052: INFO: namespace e2e-tests-configmap-gm76v deletion completed in 6.207053425s

• [SLOW TEST:20.051 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
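The ConfigMap-volume test above can be reproduced by hand with a pod of roughly this shape. This is a hypothetical reconstruction, not the suite's exact manifest: the ConfigMap name, key, image, and UID are illustrative; the container name `configmap-volume-test` and the non-root requirement come from the log.

```shell
# Create a ConfigMap and mount it as a volume into a pod that runs as a
# non-root user, mirroring what the e2e test exercises.
kubectl create configmap configmap-test-volume --from-literal=data-1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example   # hypothetical name
spec:
  securityContext:
    runAsUser: 1000              # non-root, as the test requires
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox               # illustrative image
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
EOF
```

The pod should reach `Succeeded` once the container reads the mounted key and exits, matching the `success or failure` condition polled in the log.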
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:35:09.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Dec 21 12:35:09.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-5p82m'
Dec 21 12:35:11.598: INFO: stderr: ""
Dec 21 12:35:11.598: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Dec 21 12:35:13.110: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:35:13.110: INFO: Found 0 / 1
Dec 21 12:35:13.619: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:35:13.619: INFO: Found 0 / 1
Dec 21 12:35:14.637: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:35:14.637: INFO: Found 0 / 1
Dec 21 12:35:15.617: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:35:15.617: INFO: Found 0 / 1
Dec 21 12:35:17.377: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:35:17.378: INFO: Found 0 / 1
Dec 21 12:35:17.635: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:35:17.635: INFO: Found 0 / 1
Dec 21 12:35:18.761: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:35:18.761: INFO: Found 0 / 1
Dec 21 12:35:19.622: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:35:19.622: INFO: Found 0 / 1
Dec 21 12:35:20.642: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:35:20.642: INFO: Found 0 / 1
Dec 21 12:35:21.621: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:35:21.621: INFO: Found 1 / 1
Dec 21 12:35:21.621: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 21 12:35:21.631: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:35:21.631: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Dec 21 12:35:21.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4lkbw redis-master --namespace=e2e-tests-kubectl-5p82m'
Dec 21 12:35:21.842: INFO: stderr: ""
Dec 21 12:35:21.842: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 21 Dec 12:35:19.805 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Dec 12:35:19.805 # Server started, Redis version 3.2.12\n1:M 21 Dec 12:35:19.806 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 21 Dec 12:35:19.806 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 21 12:35:21.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4lkbw redis-master --namespace=e2e-tests-kubectl-5p82m --tail=1'
Dec 21 12:35:22.031: INFO: stderr: ""
Dec 21 12:35:22.031: INFO: stdout: "1:M 21 Dec 12:35:19.806 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 21 12:35:22.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4lkbw redis-master --namespace=e2e-tests-kubectl-5p82m --limit-bytes=1'
Dec 21 12:35:22.239: INFO: stderr: ""
Dec 21 12:35:22.239: INFO: stdout: " "
STEP: exposing timestamps
Dec 21 12:35:22.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4lkbw redis-master --namespace=e2e-tests-kubectl-5p82m --tail=1 --timestamps'
Dec 21 12:35:22.388: INFO: stderr: ""
Dec 21 12:35:22.388: INFO: stdout: "2019-12-21T12:35:19.806598361Z 1:M 21 Dec 12:35:19.806 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 21 12:35:24.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4lkbw redis-master --namespace=e2e-tests-kubectl-5p82m --since=1s'
Dec 21 12:35:25.057: INFO: stderr: ""
Dec 21 12:35:25.057: INFO: stdout: ""
Dec 21 12:35:25.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-4lkbw redis-master --namespace=e2e-tests-kubectl-5p82m --since=24h'
Dec 21 12:35:25.211: INFO: stderr: ""
Dec 21 12:35:25.211: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 21 Dec 12:35:19.805 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Dec 12:35:19.805 # Server started, Redis version 3.2.12\n1:M 21 Dec 12:35:19.806 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 21 Dec 12:35:19.806 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Dec 21 12:35:25.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-5p82m'
Dec 21 12:35:25.354: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 21 12:35:25.354: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 21 12:35:25.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-5p82m'
Dec 21 12:35:25.523: INFO: stderr: "No resources found.\n"
Dec 21 12:35:25.523: INFO: stdout: ""
Dec 21 12:35:25.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-5p82m -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 21 12:35:25.681: INFO: stderr: ""
Dec 21 12:35:25.681: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:35:25.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5p82m" for this suite.
Dec 21 12:35:49.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:35:49.898: INFO: namespace: e2e-tests-kubectl-5p82m, resource: bindings, ignored listing per whitelist
Dec 21 12:35:50.045: INFO: namespace e2e-tests-kubectl-5p82m deletion completed in 24.349679981s

• [SLOW TEST:40.993 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
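The log-filtering flags exercised above can be summarized against the pod name from this run. Note the suite still invokes the deprecated `kubectl log`; current releases spell it `kubectl logs`.

```shell
NS=e2e-tests-kubectl-5p82m
# Full container log
kubectl logs redis-master-4lkbw redis-master -n "$NS"
# Only the last line
kubectl logs redis-master-4lkbw redis-master -n "$NS" --tail=1
# Only the first byte of output
kubectl logs redis-master-4lkbw redis-master -n "$NS" --limit-bytes=1
# Last line, prefixed with an RFC3339 timestamp
kubectl logs redis-master-4lkbw redis-master -n "$NS" --tail=1 --timestamps
# Restrict to a time window: 1s back is usually empty, 24h captures everything
kubectl logs redis-master-4lkbw redis-master -n "$NS" --since=1s
kubectl logs redis-master-4lkbw redis-master -n "$NS" --since=24h
```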
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:35:50.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:35:58.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-mnsz5" for this suite.
Dec 21 12:36:44.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:36:44.491: INFO: namespace: e2e-tests-kubelet-test-mnsz5, resource: bindings, ignored listing per whitelist
Dec 21 12:36:44.579: INFO: namespace e2e-tests-kubelet-test-mnsz5 deletion completed in 46.281957008s

• [SLOW TEST:54.534 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
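The hostAliases test above verifies that entries from `pod.spec.hostAliases` land in the container's `/etc/hosts`. A minimal sketch of such a pod follows; the pod name, IP, and hostnames are illustrative, not the suite's exact values.

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases     # hypothetical name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: busybox
    image: busybox
    # Print /etc/hosts so the injected entries can be checked via kubectl logs
    command: ["cat", "/etc/hosts"]
EOF
```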
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:36:44.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 21 12:36:46.815: INFO: Pod name wrapped-volume-race-8d48bdc4-23ee-11ea-bbd3-0242ac110005: Found 0 pods out of 5
Dec 21 12:36:51.848: INFO: Pod name wrapped-volume-race-8d48bdc4-23ee-11ea-bbd3-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-8d48bdc4-23ee-11ea-bbd3-0242ac110005 in namespace e2e-tests-emptydir-wrapper-6zhwk, will wait for the garbage collector to delete the pods
Dec 21 12:38:56.028: INFO: Deleting ReplicationController wrapped-volume-race-8d48bdc4-23ee-11ea-bbd3-0242ac110005 took: 33.560433ms
Dec 21 12:38:56.429: INFO: Terminating ReplicationController wrapped-volume-race-8d48bdc4-23ee-11ea-bbd3-0242ac110005 pods took: 401.104205ms
STEP: Creating RC which spawns configmap-volume pods
Dec 21 12:39:43.383: INFO: Pod name wrapped-volume-race-f675639d-23ee-11ea-bbd3-0242ac110005: Found 0 pods out of 5
Dec 21 12:39:48.416: INFO: Pod name wrapped-volume-race-f675639d-23ee-11ea-bbd3-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f675639d-23ee-11ea-bbd3-0242ac110005 in namespace e2e-tests-emptydir-wrapper-6zhwk, will wait for the garbage collector to delete the pods
Dec 21 12:42:04.724: INFO: Deleting ReplicationController wrapped-volume-race-f675639d-23ee-11ea-bbd3-0242ac110005 took: 25.509465ms
Dec 21 12:42:05.125: INFO: Terminating ReplicationController wrapped-volume-race-f675639d-23ee-11ea-bbd3-0242ac110005 pods took: 400.551942ms
STEP: Creating RC which spawns configmap-volume pods
Dec 21 12:42:53.040: INFO: Pod name wrapped-volume-race-6786c192-23ef-11ea-bbd3-0242ac110005: Found 0 pods out of 5
Dec 21 12:42:58.060: INFO: Pod name wrapped-volume-race-6786c192-23ef-11ea-bbd3-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6786c192-23ef-11ea-bbd3-0242ac110005 in namespace e2e-tests-emptydir-wrapper-6zhwk, will wait for the garbage collector to delete the pods
Dec 21 12:45:02.721: INFO: Deleting ReplicationController wrapped-volume-race-6786c192-23ef-11ea-bbd3-0242ac110005 took: 31.564117ms
Dec 21 12:45:03.122: INFO: Terminating ReplicationController wrapped-volume-race-6786c192-23ef-11ea-bbd3-0242ac110005 pods took: 400.767914ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:45:55.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-6zhwk" for this suite.
Dec 21 12:46:06.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:46:06.276: INFO: namespace: e2e-tests-emptydir-wrapper-6zhwk, resource: bindings, ignored listing per whitelist
Dec 21 12:46:06.438: INFO: namespace e2e-tests-emptydir-wrapper-6zhwk deletion completed in 10.429814668s

• [SLOW TEST:561.858 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
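The wrapper-volume race test repeatedly creates a ReplicationController whose pods mount many ConfigMap volumes at once, stressing concurrent volume setup on a single node. A rough sketch of that pattern, trimmed to one of the 50 ConfigMaps (names and image are illustrative):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: wrapped-volume-race      # hypothetical name
spec:
  replicas: 5                    # the log shows 5 pods per RC
  selector:
    name: wrapped-volume-race
  template:
    metadata:
      labels:
        name: wrapped-volume-race
    spec:
      containers:
      - name: test-container
        image: busybox
        command: ["sleep", "10000"]
        volumeMounts:
        - name: configmap-volume-0   # the real test mounts ~50 of these
          mountPath: /etc/config-0
      volumes:
      - name: configmap-volume-0
        configMap:
          name: configmap-0
EOF
```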
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:46:06.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-db03f4b6-23ef-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 21 12:46:06.927: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-db055e0f-23ef-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-djdzd" to be "success or failure"
Dec 21 12:46:06.961: INFO: Pod "pod-projected-configmaps-db055e0f-23ef-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 33.472575ms
Dec 21 12:46:09.669: INFO: Pod "pod-projected-configmaps-db055e0f-23ef-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.741802483s
Dec 21 12:46:11.686: INFO: Pod "pod-projected-configmaps-db055e0f-23ef-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.758528917s
Dec 21 12:46:13.709: INFO: Pod "pod-projected-configmaps-db055e0f-23ef-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.781654904s
Dec 21 12:46:16.334: INFO: Pod "pod-projected-configmaps-db055e0f-23ef-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.406359513s
Dec 21 12:46:18.408: INFO: Pod "pod-projected-configmaps-db055e0f-23ef-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.481050462s
Dec 21 12:46:20.430: INFO: Pod "pod-projected-configmaps-db055e0f-23ef-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.502929349s
STEP: Saw pod success
Dec 21 12:46:20.430: INFO: Pod "pod-projected-configmaps-db055e0f-23ef-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:46:20.436: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-db055e0f-23ef-11ea-bbd3-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 21 12:46:21.375: INFO: Waiting for pod pod-projected-configmaps-db055e0f-23ef-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:46:21.746: INFO: Pod pod-projected-configmaps-db055e0f-23ef-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:46:21.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-djdzd" for this suite.
Dec 21 12:46:27.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:46:27.895: INFO: namespace: e2e-tests-projected-djdzd, resource: bindings, ignored listing per whitelist
Dec 21 12:46:27.966: INFO: namespace e2e-tests-projected-djdzd deletion completed in 6.207411913s

• [SLOW TEST:21.528 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
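The projected-ConfigMap test above checks that `defaultMode` is applied to files materialized from a `projected` volume. A hedged sketch, with illustrative names and an assumed mode of `0400`:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    # Print the permissions so defaultMode can be verified in the logs
    command: ["sh", "-c", "ls -l /etc/projected-configmap-volume"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      defaultMode: 0400          # assumed mode; the test asserts whatever it sets here
      sources:
      - configMap:
          name: projected-configmap-test-volume
EOF
```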
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:46:27.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 21 12:46:28.166: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 21 12:46:33.184: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 21 12:46:39.209: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 21 12:46:39.268: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-vdbkp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vdbkp/deployments/test-cleanup-deployment,UID:ee6a0cc1-23ef-11ea-a994-fa163e34d433,ResourceVersion:15570507,Generation:1,CreationTimestamp:2019-12-21 12:46:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Dec 21 12:46:39.272: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:46:39.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-vdbkp" for this suite.
Dec 21 12:46:47.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:46:48.489: INFO: namespace: e2e-tests-deployment-vdbkp, resource: bindings, ignored listing per whitelist
Dec 21 12:46:48.985: INFO: namespace e2e-tests-deployment-vdbkp deletion completed in 9.550094063s

• [SLOW TEST:21.018 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
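Per the Deployment spec dumped above (`RevisionHistoryLimit:*0`), old ReplicaSets are garbage-collected as soon as a rollout replaces them, which is what this test waits for. A sketch of an equivalent Deployment, using the labels and image shown in the dump:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  labels:
    name: cleanup-pod
spec:
  replicas: 1
  revisionHistoryLimit: 0        # keep no superseded ReplicaSets around
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
```

After any subsequent rollout, `kubectl get rs -l name=cleanup-pod` should list only the current ReplicaSet.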
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:46:48.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 21 12:46:49.413: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4781b76-23ef-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-t4x68" to be "success or failure"
Dec 21 12:46:49.437: INFO: Pod "downwardapi-volume-f4781b76-23ef-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.254905ms
Dec 21 12:46:51.761: INFO: Pod "downwardapi-volume-f4781b76-23ef-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.34725278s
Dec 21 12:46:53.795: INFO: Pod "downwardapi-volume-f4781b76-23ef-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.381594413s
Dec 21 12:46:56.249: INFO: Pod "downwardapi-volume-f4781b76-23ef-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.835880203s
Dec 21 12:46:58.351: INFO: Pod "downwardapi-volume-f4781b76-23ef-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.937479969s
Dec 21 12:47:00.372: INFO: Pod "downwardapi-volume-f4781b76-23ef-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.958354221s
STEP: Saw pod success
Dec 21 12:47:00.372: INFO: Pod "downwardapi-volume-f4781b76-23ef-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:47:00.377: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f4781b76-23ef-11ea-bbd3-0242ac110005 container client-container: 
STEP: delete the pod
Dec 21 12:47:00.643: INFO: Waiting for pod downwardapi-volume-f4781b76-23ef-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:47:00.659: INFO: Pod downwardapi-volume-f4781b76-23ef-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:47:00.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t4x68" for this suite.
Dec 21 12:47:08.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:47:08.945: INFO: namespace: e2e-tests-projected-t4x68, resource: bindings, ignored listing per whitelist
Dec 21 12:47:09.014: INFO: namespace e2e-tests-projected-t4x68 deletion completed in 8.24889424s

• [SLOW TEST:20.029 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
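The projected downward-API test above surfaces the container's CPU limit as a file in a `projected` volume via `resourceFieldRef`. A minimal sketch, assuming an illustrative limit of 500m; the container name `client-container` matches the log:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                  # assumed limit exposed through the volume
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
```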
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:47:09.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 21 12:47:09.435: INFO: Waiting up to 5m0s for pod "downward-api-00585480-23f0-11ea-bbd3-0242ac110005" in namespace "e2e-tests-downward-api-qdsv5" to be "success or failure"
Dec 21 12:47:09.466: INFO: Pod "downward-api-00585480-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.636516ms
Dec 21 12:47:11.792: INFO: Pod "downward-api-00585480-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.356927643s
Dec 21 12:47:13.822: INFO: Pod "downward-api-00585480-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.38667376s
Dec 21 12:47:15.835: INFO: Pod "downward-api-00585480-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.399526836s
Dec 21 12:47:17.847: INFO: Pod "downward-api-00585480-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.411841717s
Dec 21 12:47:19.919: INFO: Pod "downward-api-00585480-23f0-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.483478911s
STEP: Saw pod success
Dec 21 12:47:19.919: INFO: Pod "downward-api-00585480-23f0-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:47:19.933: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-00585480-23f0-11ea-bbd3-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 21 12:47:20.797: INFO: Waiting for pod downward-api-00585480-23f0-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:47:20.811: INFO: Pod downward-api-00585480-23f0-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:47:20.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qdsv5" for this suite.
Dec 21 12:47:26.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:47:27.045: INFO: namespace: e2e-tests-downward-api-qdsv5, resource: bindings, ignored listing per whitelist
Dec 21 12:47:27.271: INFO: namespace e2e-tests-downward-api-qdsv5 deletion completed in 6.452585351s

• [SLOW TEST:18.257 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:47:27.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 21 12:47:28.011: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0b789b92-23f0-11ea-bbd3-0242ac110005" in namespace "e2e-tests-downward-api-62ls5" to be "success or failure"
Dec 21 12:47:28.057: INFO: Pod "downwardapi-volume-0b789b92-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 45.826049ms
Dec 21 12:47:30.075: INFO: Pod "downwardapi-volume-0b789b92-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063220376s
Dec 21 12:47:32.088: INFO: Pod "downwardapi-volume-0b789b92-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07666707s
Dec 21 12:47:34.115: INFO: Pod "downwardapi-volume-0b789b92-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103595363s
Dec 21 12:47:36.499: INFO: Pod "downwardapi-volume-0b789b92-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.487375195s
Dec 21 12:47:38.967: INFO: Pod "downwardapi-volume-0b789b92-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.955760319s
Dec 21 12:47:40.983: INFO: Pod "downwardapi-volume-0b789b92-23f0-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.971585338s
STEP: Saw pod success
Dec 21 12:47:40.983: INFO: Pod "downwardapi-volume-0b789b92-23f0-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:47:40.989: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0b789b92-23f0-11ea-bbd3-0242ac110005 container client-container: 
STEP: delete the pod
Dec 21 12:47:42.047: INFO: Waiting for pod downwardapi-volume-0b789b92-23f0-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:47:42.106: INFO: Pod downwardapi-volume-0b789b92-23f0-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:47:42.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-62ls5" for this suite.
Dec 21 12:47:48.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:47:48.203: INFO: namespace: e2e-tests-downward-api-62ls5, resource: bindings, ignored listing per whitelist
Dec 21 12:47:48.384: INFO: namespace e2e-tests-downward-api-62ls5 deletion completed in 6.266163756s

• [SLOW TEST:21.112 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:47:48.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-17ccc67c-23f0-11ea-bbd3-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-17ccc750-23f0-11ea-bbd3-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-17ccc67c-23f0-11ea-bbd3-0242ac110005
STEP: Updating configmap cm-test-opt-upd-17ccc750-23f0-11ea-bbd3-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-17cccb19-23f0-11ea-bbd3-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:48:09.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sf7vz" for this suite.
Dec 21 12:48:33.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:48:33.465: INFO: namespace: e2e-tests-projected-sf7vz, resource: bindings, ignored listing per whitelist
Dec 21 12:48:33.616: INFO: namespace e2e-tests-projected-sf7vz deletion completed in 24.21612431s

• [SLOW TEST:45.231 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:48:33.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 21 12:48:44.727: INFO: Successfully updated pod "labelsupdate32cfbfcd-23f0-11ea-bbd3-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:48:46.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-khwj2" for this suite.
Dec 21 12:49:10.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:49:11.019: INFO: namespace: e2e-tests-downward-api-khwj2, resource: bindings, ignored listing per whitelist
Dec 21 12:49:11.081: INFO: namespace e2e-tests-downward-api-khwj2 deletion completed in 24.183107129s

• [SLOW TEST:37.464 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:49:11.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:49:18.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-wfl5k" for this suite.
Dec 21 12:49:24.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:49:24.524: INFO: namespace: e2e-tests-namespaces-wfl5k, resource: bindings, ignored listing per whitelist
Dec 21 12:49:24.549: INFO: namespace e2e-tests-namespaces-wfl5k deletion completed in 6.474111357s
STEP: Destroying namespace "e2e-tests-nsdeletetest-cmv2g" for this suite.
Dec 21 12:49:24.559: INFO: Namespace e2e-tests-nsdeletetest-cmv2g was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-scn4g" for this suite.
Dec 21 12:49:30.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:49:30.758: INFO: namespace: e2e-tests-nsdeletetest-scn4g, resource: bindings, ignored listing per whitelist
Dec 21 12:49:30.804: INFO: namespace e2e-tests-nsdeletetest-scn4g deletion completed in 6.244643231s

• [SLOW TEST:19.722 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:49:30.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 21 12:49:31.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-gxhtx'
Dec 21 12:49:33.301: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 21 12:49:33.302: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Dec 21 12:49:35.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-gxhtx'
Dec 21 12:49:36.002: INFO: stderr: ""
Dec 21 12:49:36.002: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:49:36.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gxhtx" for this suite.
Dec 21 12:49:42.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:49:42.684: INFO: namespace: e2e-tests-kubectl-gxhtx, resource: bindings, ignored listing per whitelist
Dec 21 12:49:42.758: INFO: namespace e2e-tests-kubectl-gxhtx deletion completed in 6.746308773s

• [SLOW TEST:11.954 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:49:42.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-5bf69eb7-23f0-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 21 12:49:43.182: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5c0a8b94-23f0-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-jtx4r" to be "success or failure"
Dec 21 12:49:43.201: INFO: Pod "pod-projected-secrets-5c0a8b94-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.503664ms
Dec 21 12:49:45.252: INFO: Pod "pod-projected-secrets-5c0a8b94-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069500578s
Dec 21 12:49:47.320: INFO: Pod "pod-projected-secrets-5c0a8b94-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137837457s
Dec 21 12:49:49.556: INFO: Pod "pod-projected-secrets-5c0a8b94-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.373908716s
Dec 21 12:49:51.575: INFO: Pod "pod-projected-secrets-5c0a8b94-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.392947604s
Dec 21 12:49:53.615: INFO: Pod "pod-projected-secrets-5c0a8b94-23f0-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.432288895s
STEP: Saw pod success
Dec 21 12:49:53.615: INFO: Pod "pod-projected-secrets-5c0a8b94-23f0-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:49:53.628: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-5c0a8b94-23f0-11ea-bbd3-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 21 12:49:54.794: INFO: Waiting for pod pod-projected-secrets-5c0a8b94-23f0-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:49:54.996: INFO: Pod pod-projected-secrets-5c0a8b94-23f0-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:49:54.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jtx4r" for this suite.
Dec 21 12:50:01.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:50:01.156: INFO: namespace: e2e-tests-projected-jtx4r, resource: bindings, ignored listing per whitelist
Dec 21 12:50:01.218: INFO: namespace e2e-tests-projected-jtx4r deletion completed in 6.192630718s

• [SLOW TEST:18.459 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:50:01.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 21 12:50:01.401: INFO: Waiting up to 5m0s for pod "pod-66e889cd-23f0-11ea-bbd3-0242ac110005" in namespace "e2e-tests-emptydir-8wh76" to be "success or failure"
Dec 21 12:50:01.429: INFO: Pod "pod-66e889cd-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.127183ms
Dec 21 12:50:03.836: INFO: Pod "pod-66e889cd-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.43453621s
Dec 21 12:50:05.844: INFO: Pod "pod-66e889cd-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.44334687s
Dec 21 12:50:07.875: INFO: Pod "pod-66e889cd-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.474156853s
Dec 21 12:50:11.092: INFO: Pod "pod-66e889cd-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.691435033s
Dec 21 12:50:13.113: INFO: Pod "pod-66e889cd-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.712148083s
Dec 21 12:50:15.147: INFO: Pod "pod-66e889cd-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.745800945s
Dec 21 12:50:17.557: INFO: Pod "pod-66e889cd-23f0-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.156078679s
STEP: Saw pod success
Dec 21 12:50:17.557: INFO: Pod "pod-66e889cd-23f0-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:50:17.577: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-66e889cd-23f0-11ea-bbd3-0242ac110005 container test-container: 
STEP: delete the pod
Dec 21 12:50:17.968: INFO: Waiting for pod pod-66e889cd-23f0-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:50:17.978: INFO: Pod pod-66e889cd-23f0-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:50:17.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8wh76" for this suite.
Dec 21 12:50:24.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:50:24.197: INFO: namespace: e2e-tests-emptydir-8wh76, resource: bindings, ignored listing per whitelist
Dec 21 12:50:24.292: INFO: namespace e2e-tests-emptydir-8wh76 deletion completed in 6.304656274s

• [SLOW TEST:23.074 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:50:24.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Dec 21 12:50:24.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9x2ng'
Dec 21 12:50:25.115: INFO: stderr: ""
Dec 21 12:50:25.115: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 21 12:50:25.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9x2ng'
Dec 21 12:50:25.296: INFO: stderr: ""
Dec 21 12:50:25.296: INFO: stdout: "update-demo-nautilus-ft644 update-demo-nautilus-ftq2f "
Dec 21 12:50:25.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ft644 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9x2ng'
Dec 21 12:50:25.460: INFO: stderr: ""
Dec 21 12:50:25.461: INFO: stdout: ""
Dec 21 12:50:25.461: INFO: update-demo-nautilus-ft644 is created but not running
Dec 21 12:50:30.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9x2ng'
Dec 21 12:50:30.664: INFO: stderr: ""
Dec 21 12:50:30.664: INFO: stdout: "update-demo-nautilus-ft644 update-demo-nautilus-ftq2f "
Dec 21 12:50:30.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ft644 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9x2ng'
Dec 21 12:50:30.869: INFO: stderr: ""
Dec 21 12:50:30.869: INFO: stdout: ""
Dec 21 12:50:30.869: INFO: update-demo-nautilus-ft644 is created but not running
Dec 21 12:50:35.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9x2ng'
Dec 21 12:50:36.072: INFO: stderr: ""
Dec 21 12:50:36.072: INFO: stdout: "update-demo-nautilus-ft644 update-demo-nautilus-ftq2f "
Dec 21 12:50:36.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ft644 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9x2ng'
Dec 21 12:50:36.280: INFO: stderr: ""
Dec 21 12:50:36.280: INFO: stdout: ""
Dec 21 12:50:36.280: INFO: update-demo-nautilus-ft644 is created but not running
Dec 21 12:50:41.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9x2ng'
Dec 21 12:50:41.465: INFO: stderr: ""
Dec 21 12:50:41.465: INFO: stdout: "update-demo-nautilus-ft644 update-demo-nautilus-ftq2f "
Dec 21 12:50:41.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ft644 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9x2ng'
Dec 21 12:50:41.614: INFO: stderr: ""
Dec 21 12:50:41.614: INFO: stdout: "true"
Dec 21 12:50:41.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ft644 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9x2ng'
Dec 21 12:50:41.756: INFO: stderr: ""
Dec 21 12:50:41.756: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 21 12:50:41.756: INFO: validating pod update-demo-nautilus-ft644
Dec 21 12:50:41.807: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 21 12:50:41.807: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 21 12:50:41.807: INFO: update-demo-nautilus-ft644 is verified up and running
Dec 21 12:50:41.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftq2f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9x2ng'
Dec 21 12:50:42.005: INFO: stderr: ""
Dec 21 12:50:42.005: INFO: stdout: "true"
Dec 21 12:50:42.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftq2f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9x2ng'
Dec 21 12:50:42.153: INFO: stderr: ""
Dec 21 12:50:42.153: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 21 12:50:42.154: INFO: validating pod update-demo-nautilus-ftq2f
Dec 21 12:50:42.174: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 21 12:50:42.175: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 21 12:50:42.175: INFO: update-demo-nautilus-ftq2f is verified up and running
STEP: rolling-update to new replication controller
Dec 21 12:50:42.180: INFO: scanned /root for discovery docs: 
Dec 21 12:50:42.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-9x2ng'
Dec 21 12:51:18.659: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 21 12:51:18.660: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 21 12:51:18.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9x2ng'
Dec 21 12:51:18.811: INFO: stderr: ""
Dec 21 12:51:18.811: INFO: stdout: "update-demo-kitten-4msws update-demo-kitten-t2ppl update-demo-nautilus-ftq2f "
STEP: Replicas for name=update-demo: expected=2 actual=3
Dec 21 12:51:23.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-9x2ng'
Dec 21 12:51:24.005: INFO: stderr: ""
Dec 21 12:51:24.005: INFO: stdout: "update-demo-kitten-4msws update-demo-kitten-t2ppl "
Dec 21 12:51:24.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4msws -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9x2ng'
Dec 21 12:51:24.230: INFO: stderr: ""
Dec 21 12:51:24.231: INFO: stdout: "true"
Dec 21 12:51:24.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4msws -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9x2ng'
Dec 21 12:51:24.355: INFO: stderr: ""
Dec 21 12:51:24.355: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 21 12:51:24.355: INFO: validating pod update-demo-kitten-4msws
Dec 21 12:51:24.408: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 21 12:51:24.409: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 21 12:51:24.409: INFO: update-demo-kitten-4msws is verified up and running
Dec 21 12:51:24.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t2ppl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9x2ng'
Dec 21 12:51:24.562: INFO: stderr: ""
Dec 21 12:51:24.562: INFO: stdout: "true"
Dec 21 12:51:24.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t2ppl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9x2ng'
Dec 21 12:51:24.736: INFO: stderr: ""
Dec 21 12:51:24.736: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 21 12:51:24.736: INFO: validating pod update-demo-kitten-t2ppl
Dec 21 12:51:24.754: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 21 12:51:24.754: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 21 12:51:24.754: INFO: update-demo-kitten-t2ppl is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:51:24.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9x2ng" for this suite.
Dec 21 12:51:50.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:51:50.901: INFO: namespace: e2e-tests-kubectl-9x2ng, resource: bindings, ignored listing per whitelist
Dec 21 12:51:51.047: INFO: namespace e2e-tests-kubectl-9x2ng deletion completed in 26.282904165s

• [SLOW TEST:86.754 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
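The `rolling-update` command exercised above is deprecated (as the stderr line in the log notes) and was later removed from kubectl; the modern equivalent manages the pods through a Deployment and rolls the image with `kubectl set image` / `kubectl rollout status`. A minimal sketch, assuming a hypothetical Deployment named `update-demo` mirroring the replication controller in this test:

```yaml
# Hypothetical Deployment equivalent of the update-demo replication controller.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: update-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
# Roll to the kitten image and watch the rollout (replaces `rolling-update`):
#   kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
#   kubectl rollout status deployment/update-demo
```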
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:51:51.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Dec 21 12:51:51.280: INFO: namespace e2e-tests-kubectl-wcstq
Dec 21 12:51:51.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wcstq'
Dec 21 12:51:51.643: INFO: stderr: ""
Dec 21 12:51:51.643: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 21 12:51:52.664: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:51:52.665: INFO: Found 0 / 1
Dec 21 12:51:53.659: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:51:53.660: INFO: Found 0 / 1
Dec 21 12:51:54.675: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:51:54.675: INFO: Found 0 / 1
Dec 21 12:51:55.659: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:51:55.659: INFO: Found 0 / 1
Dec 21 12:51:56.656: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:51:56.656: INFO: Found 0 / 1
Dec 21 12:51:58.110: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:51:58.110: INFO: Found 0 / 1
Dec 21 12:51:58.672: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:51:58.672: INFO: Found 0 / 1
Dec 21 12:51:59.677: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:51:59.677: INFO: Found 0 / 1
Dec 21 12:52:00.697: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:52:00.698: INFO: Found 0 / 1
Dec 21 12:52:01.670: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:52:01.670: INFO: Found 1 / 1
Dec 21 12:52:01.670: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 21 12:52:01.679: INFO: Selector matched 1 pods for map[app:redis]
Dec 21 12:52:01.679: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 21 12:52:01.679: INFO: wait on redis-master startup in e2e-tests-kubectl-wcstq 
Dec 21 12:52:01.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wjcpf redis-master --namespace=e2e-tests-kubectl-wcstq'
Dec 21 12:52:01.917: INFO: stderr: ""
Dec 21 12:52:01.917: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 21 Dec 12:52:00.385 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Dec 12:52:00.385 # Server started, Redis version 3.2.12\n1:M 21 Dec 12:52:00.385 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 21 Dec 12:52:00.385 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 21 12:52:01.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-wcstq'
Dec 21 12:52:02.106: INFO: stderr: ""
Dec 21 12:52:02.106: INFO: stdout: "service/rm2 exposed\n"
Dec 21 12:52:02.114: INFO: Service rm2 in namespace e2e-tests-kubectl-wcstq found.
STEP: exposing service
Dec 21 12:52:04.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-wcstq'
Dec 21 12:52:04.409: INFO: stderr: ""
Dec 21 12:52:04.409: INFO: stdout: "service/rm3 exposed\n"
Dec 21 12:52:04.557: INFO: Service rm3 in namespace e2e-tests-kubectl-wcstq found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:52:06.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wcstq" for this suite.
Dec 21 12:52:32.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:52:32.811: INFO: namespace: e2e-tests-kubectl-wcstq, resource: bindings, ignored listing per whitelist
Dec 21 12:52:32.834: INFO: namespace e2e-tests-kubectl-wcstq deletion completed in 26.254023845s

• [SLOW TEST:41.787 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
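The two `kubectl expose` calls above first create a Service selecting the RC's pods, then re-expose that Service under a new name and port. The `rm2` Service they produce would look roughly like this (a sketch; the selector is inferred from the `app: redis` label seen in the pod wait loop):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis        # inferred from the test's pod selector
  ports:
  - port: 1234        # service port, from --port=1234
    targetPort: 6379  # container port, from --target-port=6379
```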
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:52:32.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 21 12:52:33.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Dec 21 12:52:33.087: INFO: stderr: ""
Dec 21 12:52:33.087: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:42Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Dec 21 12:52:33.093: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:52:33.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-x5pg7" for this suite.
Dec 21 12:52:39.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:52:39.386: INFO: namespace: e2e-tests-kubectl-x5pg7, resource: bindings, ignored listing per whitelist
Dec 21 12:52:39.477: INFO: namespace e2e-tests-kubectl-x5pg7 deletion completed in 6.330880453s

S [SKIPPING] [6.642 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Dec 21 12:52:33.093: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:52:39.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 21 12:52:39.745: INFO: Waiting up to 5m0s for pod "pod-c5484f60-23f0-11ea-bbd3-0242ac110005" in namespace "e2e-tests-emptydir-vv4qj" to be "success or failure"
Dec 21 12:52:39.793: INFO: Pod "pod-c5484f60-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 47.914667ms
Dec 21 12:52:41.943: INFO: Pod "pod-c5484f60-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197882874s
Dec 21 12:52:43.972: INFO: Pod "pod-c5484f60-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22702811s
Dec 21 12:52:46.084: INFO: Pod "pod-c5484f60-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.338388732s
Dec 21 12:52:48.104: INFO: Pod "pod-c5484f60-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.358701861s
Dec 21 12:52:50.124: INFO: Pod "pod-c5484f60-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.378720863s
Dec 21 12:52:52.147: INFO: Pod "pod-c5484f60-23f0-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.401283833s
STEP: Saw pod success
Dec 21 12:52:52.147: INFO: Pod "pod-c5484f60-23f0-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:52:52.153: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c5484f60-23f0-11ea-bbd3-0242ac110005 container test-container: 
STEP: delete the pod
Dec 21 12:52:52.412: INFO: Waiting for pod pod-c5484f60-23f0-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:52:52.483: INFO: Pod pod-c5484f60-23f0-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:52:52.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-vv4qj" for this suite.
Dec 21 12:52:58.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:52:58.765: INFO: namespace: e2e-tests-emptydir-vv4qj, resource: bindings, ignored listing per whitelist
Dec 21 12:52:58.773: INFO: namespace e2e-tests-emptydir-vv4qj deletion completed in 6.2179197s

• [SLOW TEST:19.296 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
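The EmptyDir test above creates a short-lived pod that mounts an emptyDir volume on the default medium (node disk) and verifies file access as a non-root user with 0777 permissions. A sketch of that kind of pod, with illustrative names and an assumed image:

```yaml
# Sketch of the pod shape the (non-root,0777,default) test creates; names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-demo
spec:
  securityContext:
    runAsUser: 1000          # non-root
  containers:
  - name: test-container
    image: busybox           # assumed image
    command: ["sh", "-c", "touch /data/file && chmod 0777 /data/file && ls -l /data"]
    volumeMounts:
    - name: data
      mountPath: /data
  restartPolicy: Never
  volumes:
  - name: data
    emptyDir: {}             # default medium (node disk)
```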
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:52:58.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-d0c19919-23f0-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 21 12:52:59.050: INFO: Waiting up to 5m0s for pod "pod-secrets-d0c2b42b-23f0-11ea-bbd3-0242ac110005" in namespace "e2e-tests-secrets-8mlt4" to be "success or failure"
Dec 21 12:52:59.062: INFO: Pod "pod-secrets-d0c2b42b-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.494821ms
Dec 21 12:53:01.113: INFO: Pod "pod-secrets-d0c2b42b-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0636458s
Dec 21 12:53:03.132: INFO: Pod "pod-secrets-d0c2b42b-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082526381s
Dec 21 12:53:05.880: INFO: Pod "pod-secrets-d0c2b42b-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.830281858s
Dec 21 12:53:07.894: INFO: Pod "pod-secrets-d0c2b42b-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.84465763s
Dec 21 12:53:09.925: INFO: Pod "pod-secrets-d0c2b42b-23f0-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.875038973s
STEP: Saw pod success
Dec 21 12:53:09.925: INFO: Pod "pod-secrets-d0c2b42b-23f0-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:53:09.930: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-d0c2b42b-23f0-11ea-bbd3-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 21 12:53:11.085: INFO: Waiting for pod pod-secrets-d0c2b42b-23f0-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:53:11.106: INFO: Pod pod-secrets-d0c2b42b-23f0-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:53:11.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8mlt4" for this suite.
Dec 21 12:53:19.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:53:19.195: INFO: namespace: e2e-tests-secrets-8mlt4, resource: bindings, ignored listing per whitelist
Dec 21 12:53:19.715: INFO: namespace e2e-tests-secrets-8mlt4 deletion completed in 8.600938629s

• [SLOW TEST:20.941 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
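The Secrets test above consumes a secret through a volume as a non-root user, with `defaultMode` controlling the projected file permissions and `fsGroup` controlling group ownership of the volume. A sketch of that pod shape, with illustrative names and an assumed image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  securityContext:
    runAsUser: 1000     # non-root
    fsGroup: 2000       # group ownership applied to volume contents
  containers:
  - name: secret-volume-test
    image: busybox      # assumed image
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/*"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-demo  # illustrative name
      defaultMode: 0400             # permissions on the projected key files
```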
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:53:19.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Dec 21 12:53:19.959: INFO: Waiting up to 5m0s for pod "client-containers-dd40e905-23f0-11ea-bbd3-0242ac110005" in namespace "e2e-tests-containers-qmthc" to be "success or failure"
Dec 21 12:53:20.055: INFO: Pod "client-containers-dd40e905-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 95.254448ms
Dec 21 12:53:22.076: INFO: Pod "client-containers-dd40e905-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116381928s
Dec 21 12:53:24.099: INFO: Pod "client-containers-dd40e905-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139643481s
Dec 21 12:53:26.743: INFO: Pod "client-containers-dd40e905-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.783771133s
Dec 21 12:53:28.804: INFO: Pod "client-containers-dd40e905-23f0-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.844164533s
Dec 21 12:53:30.834: INFO: Pod "client-containers-dd40e905-23f0-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.874747579s
STEP: Saw pod success
Dec 21 12:53:30.834: INFO: Pod "client-containers-dd40e905-23f0-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:53:30.841: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-dd40e905-23f0-11ea-bbd3-0242ac110005 container test-container: 
STEP: delete the pod
Dec 21 12:53:31.060: INFO: Waiting for pod client-containers-dd40e905-23f0-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:53:31.078: INFO: Pod client-containers-dd40e905-23f0-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:53:31.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-qmthc" for this suite.
Dec 21 12:53:37.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:53:37.275: INFO: namespace: e2e-tests-containers-qmthc, resource: bindings, ignored listing per whitelist
Dec 21 12:53:37.340: INFO: namespace e2e-tests-containers-qmthc deletion completed in 6.257250865s

• [SLOW TEST:17.625 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
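The Docker Containers test above overrides the image's default command: in a pod spec, `command` replaces the image ENTRYPOINT and `args` replaces the image CMD. A sketch with illustrative values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: override-command-demo
spec:
  containers:
  - name: test-container
    image: busybox                    # assumed image
    command: ["echo"]                 # overrides the image ENTRYPOINT
    args: ["hello", "from", "args"]   # overrides the image CMD
  restartPolicy: Never
```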
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:53:37.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-w2jfk
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Dec 21 12:53:37.754: INFO: Found 0 stateful pods, waiting for 3
Dec 21 12:53:47.776: INFO: Found 1 stateful pods, waiting for 3
Dec 21 12:53:57.770: INFO: Found 2 stateful pods, waiting for 3
Dec 21 12:54:07.892: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 12:54:07.893: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 12:54:07.893: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 21 12:54:17.777: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 12:54:17.778: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 12:54:17.778: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 21 12:54:17.877: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 21 12:54:28.087: INFO: Updating stateful set ss2
Dec 21 12:54:28.117: INFO: Waiting for Pod e2e-tests-statefulset-w2jfk/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 12:54:38.146: INFO: Waiting for Pod e2e-tests-statefulset-w2jfk/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 21 12:54:54.287: INFO: Found 2 stateful pods, waiting for 3
Dec 21 12:55:04.639: INFO: Found 2 stateful pods, waiting for 3
Dec 21 12:55:14.342: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 12:55:14.343: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 12:55:14.343: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 21 12:55:24.319: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 12:55:24.319: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 12:55:24.319: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 21 12:55:34.325: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 12:55:34.325: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 21 12:55:34.325: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 21 12:55:34.503: INFO: Updating stateful set ss2
Dec 21 12:55:34.520: INFO: Waiting for Pod e2e-tests-statefulset-w2jfk/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 12:55:45.757: INFO: Updating stateful set ss2
Dec 21 12:55:45.979: INFO: Waiting for StatefulSet e2e-tests-statefulset-w2jfk/ss2 to complete update
Dec 21 12:55:45.979: INFO: Waiting for Pod e2e-tests-statefulset-w2jfk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 12:55:56.872: INFO: Waiting for StatefulSet e2e-tests-statefulset-w2jfk/ss2 to complete update
Dec 21 12:55:56.872: INFO: Waiting for Pod e2e-tests-statefulset-w2jfk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 12:56:06.007: INFO: Waiting for StatefulSet e2e-tests-statefulset-w2jfk/ss2 to complete update
Dec 21 12:56:06.007: INFO: Waiting for Pod e2e-tests-statefulset-w2jfk/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 21 12:56:16.882: INFO: Waiting for StatefulSet e2e-tests-statefulset-w2jfk/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Dec 21 12:56:26.000: INFO: Deleting all statefulset in ns e2e-tests-statefulset-w2jfk
Dec 21 12:56:26.006: INFO: Scaling statefulset ss2 to 0
Dec 21 12:56:56.048: INFO: Waiting for statefulset status.replicas updated to 0
Dec 21 12:56:56.054: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:56:56.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-w2jfk" for this suite.
Dec 21 12:57:04.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:57:04.383: INFO: namespace: e2e-tests-statefulset-w2jfk, resource: bindings, ignored listing per whitelist
Dec 21 12:57:04.391: INFO: namespace e2e-tests-statefulset-w2jfk deletion completed in 8.266519304s

• [SLOW TEST:207.051 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
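The canary and phased behavior in the StatefulSet test above is driven by the RollingUpdate `partition` field: pods with an ordinal greater than or equal to the partition receive the new revision, lower ordinals keep the old one, and lowering the partition step by step phases the rollout through the remaining pods. A sketch of the relevant spec, with illustrative labels:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2-demo        # illustrative label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2         # only ordinals >= 2 (i.e. ss2-2) get the new template
  template:
    metadata:
      labels:
        app: ss2-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine
# Lower the partition (2 -> 1 -> 0) to phase the rollout across ss2-1 and ss2-0.
```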
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:57:04.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Dec 21 12:57:04.648: INFO: Waiting up to 5m0s for pod "client-containers-632f6872-23f1-11ea-bbd3-0242ac110005" in namespace "e2e-tests-containers-lgz6r" to be "success or failure"
Dec 21 12:57:04.653: INFO: Pod "client-containers-632f6872-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.346657ms
Dec 21 12:57:06.662: INFO: Pod "client-containers-632f6872-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014298821s
Dec 21 12:57:08.729: INFO: Pod "client-containers-632f6872-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081475732s
Dec 21 12:57:10.759: INFO: Pod "client-containers-632f6872-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110612338s
Dec 21 12:57:12.791: INFO: Pod "client-containers-632f6872-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142939567s
Dec 21 12:57:14.821: INFO: Pod "client-containers-632f6872-23f1-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.172747773s
STEP: Saw pod success
Dec 21 12:57:14.821: INFO: Pod "client-containers-632f6872-23f1-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:57:14.830: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-632f6872-23f1-11ea-bbd3-0242ac110005 container test-container: 
STEP: delete the pod
Dec 21 12:57:14.965: INFO: Waiting for pod client-containers-632f6872-23f1-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:57:14.976: INFO: Pod client-containers-632f6872-23f1-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:57:14.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-lgz6r" for this suite.
Dec 21 12:57:21.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:57:21.229: INFO: namespace: e2e-tests-containers-lgz6r, resource: bindings, ignored listing per whitelist
Dec 21 12:57:21.326: INFO: namespace e2e-tests-containers-lgz6r deletion completed in 6.340571794s

• [SLOW TEST:16.934 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:57:21.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-6d49f886-23f1-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 21 12:57:21.622: INFO: Waiting up to 5m0s for pod "pod-secrets-6d4b5700-23f1-11ea-bbd3-0242ac110005" in namespace "e2e-tests-secrets-8plxm" to be "success or failure"
Dec 21 12:57:21.634: INFO: Pod "pod-secrets-6d4b5700-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.945884ms
Dec 21 12:57:23.928: INFO: Pod "pod-secrets-6d4b5700-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305957992s
Dec 21 12:57:25.942: INFO: Pod "pod-secrets-6d4b5700-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319418451s
Dec 21 12:57:28.314: INFO: Pod "pod-secrets-6d4b5700-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.692064874s
Dec 21 12:57:30.325: INFO: Pod "pod-secrets-6d4b5700-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.702728777s
Dec 21 12:57:32.364: INFO: Pod "pod-secrets-6d4b5700-23f1-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.742075178s
STEP: Saw pod success
Dec 21 12:57:32.365: INFO: Pod "pod-secrets-6d4b5700-23f1-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:57:32.396: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-6d4b5700-23f1-11ea-bbd3-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Dec 21 12:57:32.653: INFO: Waiting for pod pod-secrets-6d4b5700-23f1-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:57:32.750: INFO: Pod pod-secrets-6d4b5700-23f1-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:57:32.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8plxm" for this suite.
Dec 21 12:57:40.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:57:41.036: INFO: namespace: e2e-tests-secrets-8plxm, resource: bindings, ignored listing per whitelist
Dec 21 12:57:41.107: INFO: namespace e2e-tests-secrets-8plxm deletion completed in 8.321027543s

• [SLOW TEST:19.781 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:57:41.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Dec 21 12:57:52.068: INFO: Successfully updated pod "labelsupdate790559ee-23f1-11ea-bbd3-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:57:54.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zhgs2" for this suite.
Dec 21 12:58:18.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:58:18.368: INFO: namespace: e2e-tests-projected-zhgs2, resource: bindings, ignored listing per whitelist
Dec 21 12:58:18.388: INFO: namespace e2e-tests-projected-zhgs2 deletion completed in 24.218000351s

• [SLOW TEST:37.281 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:58:18.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Dec 21 12:58:31.029: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:59:05.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-58jw6" for this suite.
Dec 21 12:59:11.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:59:11.765: INFO: namespace: e2e-tests-namespaces-58jw6, resource: bindings, ignored listing per whitelist
Dec 21 12:59:11.907: INFO: namespace e2e-tests-namespaces-58jw6 deletion completed in 6.441300914s
STEP: Destroying namespace "e2e-tests-nsdeletetest-2rc86" for this suite.
Dec 21 12:59:11.912: INFO: Namespace e2e-tests-nsdeletetest-2rc86 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-5n887" for this suite.
Dec 21 12:59:19.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:59:19.982: INFO: namespace: e2e-tests-nsdeletetest-5n887, resource: bindings, ignored listing per whitelist
Dec 21 12:59:20.095: INFO: namespace e2e-tests-nsdeletetest-5n887 deletion completed in 8.183394029s

• [SLOW TEST:61.707 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:59:20.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 21 12:59:20.306: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b409afa8-23f1-11ea-bbd3-0242ac110005" in namespace "e2e-tests-downward-api-2b2td" to be "success or failure"
Dec 21 12:59:20.518: INFO: Pod "downwardapi-volume-b409afa8-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 212.014562ms
Dec 21 12:59:22.685: INFO: Pod "downwardapi-volume-b409afa8-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.378728455s
Dec 21 12:59:24.694: INFO: Pod "downwardapi-volume-b409afa8-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.387523498s
Dec 21 12:59:27.066: INFO: Pod "downwardapi-volume-b409afa8-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.759239484s
Dec 21 12:59:29.079: INFO: Pod "downwardapi-volume-b409afa8-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.77287787s
Dec 21 12:59:31.290: INFO: Pod "downwardapi-volume-b409afa8-23f1-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.983422617s
STEP: Saw pod success
Dec 21 12:59:31.290: INFO: Pod "downwardapi-volume-b409afa8-23f1-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:59:31.309: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b409afa8-23f1-11ea-bbd3-0242ac110005 container client-container: 
STEP: delete the pod
Dec 21 12:59:31.700: INFO: Waiting for pod downwardapi-volume-b409afa8-23f1-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:59:31.772: INFO: Pod downwardapi-volume-b409afa8-23f1-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:59:31.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2b2td" for this suite.
Dec 21 12:59:39.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 12:59:40.083: INFO: namespace: e2e-tests-downward-api-2b2td, resource: bindings, ignored listing per whitelist
Dec 21 12:59:40.147: INFO: namespace e2e-tests-downward-api-2b2td deletion completed in 8.361569285s

• [SLOW TEST:20.052 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 12:59:40.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Dec 21 12:59:40.601: INFO: Waiting up to 5m0s for pod "downward-api-c014d1d9-23f1-11ea-bbd3-0242ac110005" in namespace "e2e-tests-downward-api-s6z7j" to be "success or failure"
Dec 21 12:59:40.639: INFO: Pod "downward-api-c014d1d9-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 37.439698ms
Dec 21 12:59:42.916: INFO: Pod "downward-api-c014d1d9-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314346529s
Dec 21 12:59:45.035: INFO: Pod "downward-api-c014d1d9-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433652239s
Dec 21 12:59:47.085: INFO: Pod "downward-api-c014d1d9-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.484053526s
Dec 21 12:59:50.680: INFO: Pod "downward-api-c014d1d9-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.078514286s
Dec 21 12:59:52.703: INFO: Pod "downward-api-c014d1d9-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.101835217s
Dec 21 12:59:54.740: INFO: Pod "downward-api-c014d1d9-23f1-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.138331316s
Dec 21 12:59:56.769: INFO: Pod "downward-api-c014d1d9-23f1-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.167853916s
STEP: Saw pod success
Dec 21 12:59:56.769: INFO: Pod "downward-api-c014d1d9-23f1-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 12:59:56.778: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-c014d1d9-23f1-11ea-bbd3-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 21 12:59:57.231: INFO: Waiting for pod downward-api-c014d1d9-23f1-11ea-bbd3-0242ac110005 to disappear
Dec 21 12:59:57.336: INFO: Pod downward-api-c014d1d9-23f1-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 12:59:57.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-s6z7j" for this suite.
Dec 21 13:00:05.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:00:05.650: INFO: namespace: e2e-tests-downward-api-s6z7j, resource: bindings, ignored listing per whitelist
Dec 21 13:00:06.323: INFO: namespace e2e-tests-downward-api-s6z7j deletion completed in 8.968907697s

• [SLOW TEST:26.176 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:00:06.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-cfed54af-23f1-11ea-bbd3-0242ac110005
STEP: Creating secret with name s-test-opt-upd-cfed5782-23f1-11ea-bbd3-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-cfed54af-23f1-11ea-bbd3-0242ac110005
STEP: Updating secret s-test-opt-upd-cfed5782-23f1-11ea-bbd3-0242ac110005
STEP: Creating secret with name s-test-opt-create-cfed5895-23f1-11ea-bbd3-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:01:59.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ldfxz" for this suite.
Dec 21 13:02:23.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:02:23.181: INFO: namespace: e2e-tests-projected-ldfxz, resource: bindings, ignored listing per whitelist
Dec 21 13:02:23.478: INFO: namespace e2e-tests-projected-ldfxz deletion completed in 24.4221735s

• [SLOW TEST:137.154 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:02:23.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-217e0f25-23f2-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 21 13:02:23.956: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-21817252-23f2-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-wpt55" to be "success or failure"
Dec 21 13:02:24.162: INFO: Pod "pod-projected-secrets-21817252-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 206.339589ms
Dec 21 13:02:26.181: INFO: Pod "pod-projected-secrets-21817252-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225138905s
Dec 21 13:02:28.194: INFO: Pod "pod-projected-secrets-21817252-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237555776s
Dec 21 13:02:31.416: INFO: Pod "pod-projected-secrets-21817252-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.459534884s
Dec 21 13:02:33.434: INFO: Pod "pod-projected-secrets-21817252-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.478311134s
Dec 21 13:02:35.456: INFO: Pod "pod-projected-secrets-21817252-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.500003729s
Dec 21 13:02:37.479: INFO: Pod "pod-projected-secrets-21817252-23f2-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.523023027s
STEP: Saw pod success
Dec 21 13:02:37.479: INFO: Pod "pod-projected-secrets-21817252-23f2-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 13:02:37.485: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-21817252-23f2-11ea-bbd3-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 21 13:02:38.531: INFO: Waiting for pod pod-projected-secrets-21817252-23f2-11ea-bbd3-0242ac110005 to disappear
Dec 21 13:02:38.914: INFO: Pod pod-projected-secrets-21817252-23f2-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:02:38.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wpt55" for this suite.
Dec 21 13:02:45.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:02:45.308: INFO: namespace: e2e-tests-projected-wpt55, resource: bindings, ignored listing per whitelist
Dec 21 13:02:45.345: INFO: namespace e2e-tests-projected-wpt55 deletion completed in 6.197423588s

• [SLOW TEST:21.867 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:02:45.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 21 13:02:45.712: INFO: Number of nodes with available pods: 0
Dec 21 13:02:45.712: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 13:02:46.731: INFO: Number of nodes with available pods: 0
Dec 21 13:02:46.731: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 13:02:47.976: INFO: Number of nodes with available pods: 0
Dec 21 13:02:47.976: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 13:02:48.739: INFO: Number of nodes with available pods: 0
Dec 21 13:02:48.739: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 13:02:49.766: INFO: Number of nodes with available pods: 0
Dec 21 13:02:49.766: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 13:02:50.729: INFO: Number of nodes with available pods: 0
Dec 21 13:02:50.729: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 13:02:51.769: INFO: Number of nodes with available pods: 0
Dec 21 13:02:51.770: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 13:02:52.781: INFO: Number of nodes with available pods: 0
Dec 21 13:02:52.781: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 13:02:53.745: INFO: Number of nodes with available pods: 0
Dec 21 13:02:53.745: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 13:02:54.747: INFO: Number of nodes with available pods: 0
Dec 21 13:02:54.747: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Dec 21 13:02:55.785: INFO: Number of nodes with available pods: 1
Dec 21 13:02:55.785: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 21 13:02:55.816: INFO: Number of nodes with available pods: 1
Dec 21 13:02:55.816: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-7rfjt, will wait for the garbage collector to delete the pods
Dec 21 13:02:56.922: INFO: Deleting DaemonSet.extensions daemon-set took: 16.285176ms
Dec 21 13:02:58.023: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.10114266s
Dec 21 13:02:59.436: INFO: Number of nodes with available pods: 0
Dec 21 13:02:59.436: INFO: Number of running nodes: 0, number of available pods: 0
Dec 21 13:02:59.440: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-7rfjt/daemonsets","resourceVersion":"15572686"},"items":null}

Dec 21 13:02:59.443: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-7rfjt/pods","resourceVersion":"15572686"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:02:59.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-7rfjt" for this suite.
Dec 21 13:03:05.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:03:05.618: INFO: namespace: e2e-tests-daemonsets-7rfjt, resource: bindings, ignored listing per whitelist
Dec 21 13:03:05.624: INFO: namespace e2e-tests-daemonsets-7rfjt deletion completed in 6.165735855s

• [SLOW TEST:20.279 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:03:05.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 21 13:03:05.818: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a7554de-23f2-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-zmmk8" to be "success or failure"
Dec 21 13:03:05.903: INFO: Pod "downwardapi-volume-3a7554de-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 84.580864ms
Dec 21 13:03:08.280: INFO: Pod "downwardapi-volume-3a7554de-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.461536177s
Dec 21 13:03:10.311: INFO: Pod "downwardapi-volume-3a7554de-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.492698158s
Dec 21 13:03:12.867: INFO: Pod "downwardapi-volume-3a7554de-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.049211794s
Dec 21 13:03:14.911: INFO: Pod "downwardapi-volume-3a7554de-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.092644429s
Dec 21 13:03:16.927: INFO: Pod "downwardapi-volume-3a7554de-23f2-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.108724998s
STEP: Saw pod success
Dec 21 13:03:16.927: INFO: Pod "downwardapi-volume-3a7554de-23f2-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 13:03:16.933: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3a7554de-23f2-11ea-bbd3-0242ac110005 container client-container: 
STEP: delete the pod
Dec 21 13:03:17.222: INFO: Waiting for pod downwardapi-volume-3a7554de-23f2-11ea-bbd3-0242ac110005 to disappear
Dec 21 13:03:17.234: INFO: Pod downwardapi-volume-3a7554de-23f2-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:03:17.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zmmk8" for this suite.
Dec 21 13:03:23.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:03:23.426: INFO: namespace: e2e-tests-projected-zmmk8, resource: bindings, ignored listing per whitelist
Dec 21 13:03:23.513: INFO: namespace e2e-tests-projected-zmmk8 deletion completed in 6.27040205s

• [SLOW TEST:17.889 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
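The "should provide podname only" test above creates a pod whose projected downward API volume exposes `metadata.name` as a file, then reads it back from the `client-container` logs. A minimal manifest that reproduces this setup might look like the sketch below (the pod name and mount path are illustrative assumptions, not the exact template the e2e framework generates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name; the suite generates a UUID-suffixed one
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Print the projected pod name, mirroring what the test reads from the logs.
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name   # only the pod name is exposed
```

The pod runs to completion (`Phase="Succeeded"` above) because the container exits after the single `cat`, which is the "success or failure" condition the test polls for.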
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:03:23.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 21 13:03:35.951: INFO: 10 pods remaining
Dec 21 13:03:35.951: INFO: 9 pods have nil DeletionTimestamp
Dec 21 13:03:35.951: INFO: 
Dec 21 13:03:38.092: INFO: 0 pods remaining
Dec 21 13:03:38.092: INFO: 0 pods have nil DeletionTimestamp
Dec 21 13:03:38.092: INFO: 
Dec 21 13:03:39.215: INFO: 0 pods remaining
Dec 21 13:03:39.215: INFO: 0 pods have nil DeletionTimestamp
Dec 21 13:03:39.215: INFO: 
Dec 21 13:03:39.885: INFO: 0 pods remaining
Dec 21 13:03:39.885: INFO: 0 pods have nil DeletionTimestamp
Dec 21 13:03:39.885: INFO: 
STEP: Gathering metrics
W1221 13:03:40.740181       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 21 13:03:40.740: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:03:40.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-bl2ml" for this suite.
Dec 21 13:03:58.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:03:59.303: INFO: namespace: e2e-tests-gc-bl2ml, resource: bindings, ignored listing per whitelist
Dec 21 13:03:59.344: INFO: namespace e2e-tests-gc-bl2ml deletion completed in 18.587335103s

• [SLOW TEST:35.831 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
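The garbage-collector test above deletes the RC with deleteOptions requesting foreground cascading deletion: the RC object is retained, pinned by the `foregroundDeletion` finalizer, until the garbage collector has removed every dependent pod (hence the "N pods remaining" countdown). A sketch of the request body, with a hypothetical RC name, is:

```yaml
# DeleteOptions body sent with the delete request, e.g.:
#   DELETE /api/v1/namespaces/<ns>/replicationcontrollers/my-rc
# With Foreground propagation the RC stays visible until all of its
# pods have a DeletionTimestamp and are fully removed.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Foreground
```

With the default (background) propagation the RC would disappear immediately and its pods would be collected afterwards, which is exactly the behavior this conformance test rules out.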
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:03:59.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Dec 21 13:03:59.912: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a99aa88-23f2-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-h8bl7" to be "success or failure"
Dec 21 13:03:59.928: INFO: Pod "downwardapi-volume-5a99aa88-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.018724ms
Dec 21 13:04:02.254: INFO: Pod "downwardapi-volume-5a99aa88-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.341515346s
Dec 21 13:04:04.297: INFO: Pod "downwardapi-volume-5a99aa88-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.384928189s
Dec 21 13:04:06.443: INFO: Pod "downwardapi-volume-5a99aa88-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.530354638s
Dec 21 13:04:08.458: INFO: Pod "downwardapi-volume-5a99aa88-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.546109581s
Dec 21 13:04:10.503: INFO: Pod "downwardapi-volume-5a99aa88-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.590304073s
Dec 21 13:04:12.566: INFO: Pod "downwardapi-volume-5a99aa88-23f2-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.653772715s
STEP: Saw pod success
Dec 21 13:04:12.566: INFO: Pod "downwardapi-volume-5a99aa88-23f2-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 13:04:12.580: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5a99aa88-23f2-11ea-bbd3-0242ac110005 container client-container: 
STEP: delete the pod
Dec 21 13:04:13.084: INFO: Waiting for pod downwardapi-volume-5a99aa88-23f2-11ea-bbd3-0242ac110005 to disappear
Dec 21 13:04:13.113: INFO: Pod downwardapi-volume-5a99aa88-23f2-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:04:13.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h8bl7" for this suite.
Dec 21 13:04:21.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:04:21.465: INFO: namespace: e2e-tests-projected-h8bl7, resource: bindings, ignored listing per whitelist
Dec 21 13:04:21.538: INFO: namespace e2e-tests-projected-h8bl7 deletion completed in 8.282379969s

• [SLOW TEST:22.192 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
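The "should provide container's memory limit" variant works the same way as the podname test, but uses a `resourceFieldRef` instead of a `fieldRef`. A plausible minimal manifest (names, limit value, and divisor are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"               # the value projected into the file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi           # file contains the limit expressed in Mi
```

Note that `resourceFieldRef` requires naming the container explicitly, since different containers in the pod may carry different limits.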
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:04:21.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-67c2a783-23f2-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 21 13:04:21.910: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-67ce2601-23f2-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-g9thg" to be "success or failure"
Dec 21 13:04:21.919: INFO: Pod "pod-projected-secrets-67ce2601-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.377128ms
Dec 21 13:04:24.140: INFO: Pod "pod-projected-secrets-67ce2601-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229311028s
Dec 21 13:04:26.158: INFO: Pod "pod-projected-secrets-67ce2601-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.247682806s
Dec 21 13:04:28.380: INFO: Pod "pod-projected-secrets-67ce2601-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.470113014s
Dec 21 13:04:30.442: INFO: Pod "pod-projected-secrets-67ce2601-23f2-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.5315931s
Dec 21 13:04:32.475: INFO: Pod "pod-projected-secrets-67ce2601-23f2-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.564388588s
STEP: Saw pod success
Dec 21 13:04:32.475: INFO: Pod "pod-projected-secrets-67ce2601-23f2-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 13:04:32.501: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-67ce2601-23f2-11ea-bbd3-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 21 13:04:32.696: INFO: Waiting for pod pod-projected-secrets-67ce2601-23f2-11ea-bbd3-0242ac110005 to disappear
Dec 21 13:04:32.710: INFO: Pod pod-projected-secrets-67ce2601-23f2-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:04:32.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-g9thg" for this suite.
Dec 21 13:04:38.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:04:38.995: INFO: namespace: e2e-tests-projected-g9thg, resource: bindings, ignored listing per whitelist
Dec 21 13:04:39.021: INFO: namespace e2e-tests-projected-g9thg deletion completed in 6.296933718s

• [SLOW TEST:17.482 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
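The projected-secret test above mounts a secret through a `projected` volume, remapping the key to a new path ("mappings") and setting a per-item file mode. A sketch of the objects involved, with hypothetical names and data:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test-map   # hypothetical; the suite appends a UUID
data:
  data-1: dmFsdWUtMQ==              # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1
            path: new-path-data-1   # the "mapping": key renamed on disk
            mode: 0400              # the per-item "Item Mode" under test
```

The test then asserts that the file appears at the mapped path with the requested mode and content.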
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:04:39.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 21 13:04:39.501: INFO: Creating deployment "nginx-deployment"
Dec 21 13:04:39.532: INFO: Waiting for observed generation 1
Dec 21 13:04:42.531: INFO: Waiting for all required pods to come up
Dec 21 13:04:43.946: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 21 13:05:31.382: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 21 13:05:31.427: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 21 13:05:31.527: INFO: Updating deployment nginx-deployment
Dec 21 13:05:31.527: INFO: Waiting for observed generation 2
Dec 21 13:05:34.945: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 21 13:05:36.187: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 21 13:05:36.707: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 21 13:05:37.388: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 21 13:05:37.388: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 21 13:05:37.392: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 21 13:05:37.408: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 21 13:05:37.408: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 21 13:05:38.962: INFO: Updating deployment nginx-deployment
Dec 21 13:05:38.963: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 21 13:05:40.129: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 21 13:05:44.251: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
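The proportional-scaling behavior verified above can be sketched from the values in this run's object dumps: with `maxSurge: 3` and `maxUnavailable: 2`, scaling the deployment from 10 to 30 replicas mid-rollout is apportioned across both ReplicaSets (20 on the old, 13 on the new, keeping the total within the 33-replica surge cap). A deployment matching those parameters might look like this (the pod template is an assumption reconstructed from the log; the broken `nginx:404` image is what the test deliberately rolls out):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    name: nginx
spec:
  replicas: 10                 # later scaled to 30 while the rollout is stuck
  selector:
    matchLabels:
      name: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3              # at most 33 total replicas during the rollout
      maxUnavailable: 2        # at least 8 of 10 must stay available
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # updated to nginx:404 mid-test
```

Because the new ReplicaSet can never become ready (its image does not exist), the extra replicas from the scale-up are split proportionally to each ReplicaSet's current size rather than all going to the newest one.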
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 21 13:05:45.647: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bvv8c/deployments/nginx-deployment,UID:724fd877-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573282,Generation:3,CreationTimestamp:2019-12-21 13:04:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-12-21 13:05:33 +0000 UTC 2019-12-21 13:04:39 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2019-12-21 13:05:41 +0000 UTC 2019-12-21 13:05:41 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Dec 21 13:05:47.032: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bvv8c/replicasets/nginx-deployment-5c98f8fb5,UID:914db8b0-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573288,Generation:3,CreationTimestamp:2019-12-21 13:05:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 724fd877-23f2-11ea-a994-fa163e34d433 0xc0027a0937 0xc0027a0938}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 21 13:05:47.032: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 21 13:05:47.033: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bvv8c/replicasets/nginx-deployment-85ddf47c5d,UID:7256639d-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573280,Generation:3,CreationTimestamp:2019-12-21 13:04:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 724fd877-23f2-11ea-a994-fa163e34d433 0xc0027a09f7 0xc0027a09f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Dec 21 13:05:48.066: INFO: Pod "nginx-deployment-5c98f8fb5-6hdzv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6hdzv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-5c98f8fb5-6hdzv,UID:98e9dba4-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573283,Generation:0,CreationTimestamp:2019-12-21 13:05:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 914db8b0-23f2-11ea-a994-fa163e34d433 0xc0029369e7 0xc0029369e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002936a50} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002936a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:44 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.066: INFO: Pod "nginx-deployment-5c98f8fb5-cf82w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cf82w,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-5c98f8fb5-cf82w,UID:986dd020-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573279,Generation:0,CreationTimestamp:2019-12-21 13:05:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 914db8b0-23f2-11ea-a994-fa163e34d433 0xc002936ae7 0xc002936ae8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002936b50} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002936b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:44 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.067: INFO: Pod "nginx-deployment-5c98f8fb5-d2rx5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-d2rx5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-5c98f8fb5-d2rx5,UID:97bee79b-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573266,Generation:0,CreationTimestamp:2019-12-21 13:05:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 914db8b0-23f2-11ea-a994-fa163e34d433 0xc002936c67 0xc002936c68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002936cd0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002936cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.067: INFO: Pod "nginx-deployment-5c98f8fb5-dgw27" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dgw27,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-5c98f8fb5-dgw27,UID:98680edf-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573278,Generation:0,CreationTimestamp:2019-12-21 13:05:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 914db8b0-23f2-11ea-a994-fa163e34d433 0xc002936d67 0xc002936d68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002936dd0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002936df0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:44 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.068: INFO: Pod "nginx-deployment-5c98f8fb5-hf2xd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hf2xd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-5c98f8fb5-hf2xd,UID:91e45b59-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573216,Generation:0,CreationTimestamp:2019-12-21 13:05:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 914db8b0-23f2-11ea-a994-fa163e34d433 0xc002936e67 0xc002936e68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002936ee0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002936f00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-21 13:05:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.068: INFO: Pod "nginx-deployment-5c98f8fb5-jq4h7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jq4h7,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-5c98f8fb5-jq4h7,UID:986c83d7-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573276,Generation:0,CreationTimestamp:2019-12-21 13:05:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 914db8b0-23f2-11ea-a994-fa163e34d433 0xc002937037 0xc002937038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029370a0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0029370c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:44 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.069: INFO: Pod "nginx-deployment-5c98f8fb5-mzcwx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mzcwx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-5c98f8fb5-mzcwx,UID:91712701-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573203,Generation:0,CreationTimestamp:2019-12-21 13:05:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 914db8b0-23f2-11ea-a994-fa163e34d433 0xc002937137 0xc002937138}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029371a0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc0029371c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:31 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-21 13:05:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.069: INFO: Pod "nginx-deployment-5c98f8fb5-ptlvv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ptlvv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-5c98f8fb5-ptlvv,UID:9745111b-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573246,Generation:0,CreationTimestamp:2019-12-21 13:05:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 914db8b0-23f2-11ea-a994-fa163e34d433 0xc002937587 0xc002937588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029375f0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002937610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.070: INFO: Pod "nginx-deployment-5c98f8fb5-q7r7t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-q7r7t,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-5c98f8fb5-q7r7t,UID:97be5cde-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573265,Generation:0,CreationTimestamp:2019-12-21 13:05:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 914db8b0-23f2-11ea-a994-fa163e34d433 0xc002937687 0xc002937688}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029376f0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002937710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.070: INFO: Pod "nginx-deployment-5c98f8fb5-rb8ws" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rb8ws,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-5c98f8fb5-rb8ws,UID:9869230c-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573273,Generation:0,CreationTimestamp:2019-12-21 13:05:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 914db8b0-23f2-11ea-a994-fa163e34d433 0xc0029377c7 0xc0029377c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002937830} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002937850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:44 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.071: INFO: Pod "nginx-deployment-5c98f8fb5-tn9nb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tn9nb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-5c98f8fb5-tn9nb,UID:9201de82-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573241,Generation:0,CreationTimestamp:2019-12-21 13:05:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 914db8b0-23f2-11ea-a994-fa163e34d433 0xc002937907 0xc002937908}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002937970} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002937990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-21 13:05:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.072: INFO: Pod "nginx-deployment-5c98f8fb5-tw2fr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tw2fr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-5c98f8fb5-tw2fr,UID:917315c0-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573209,Generation:0,CreationTimestamp:2019-12-21 13:05:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 914db8b0-23f2-11ea-a994-fa163e34d433 0xc002937a57 0xc002937a58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002937af0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002937b10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:31 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-21 13:05:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.072: INFO: Pod "nginx-deployment-5c98f8fb5-twhhl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-twhhl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-5c98f8fb5-twhhl,UID:916d164c-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573184,Generation:0,CreationTimestamp:2019-12-21 13:05:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 914db8b0-23f2-11ea-a994-fa163e34d433 0xc002937bd7 0xc002937bd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002937cb0} 
{node.kubernetes.io/unreachable Exists  NoExecute 0xc002937cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:31 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-21 13:05:31 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.073: INFO: Pod "nginx-deployment-85ddf47c5d-4q2wk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4q2wk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-4q2wk,UID:72adefd1-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573156,Generation:0,CreationTimestamp:2019-12-21 13:04:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc002937d97 0xc002937d98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002937e90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002937eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:04:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:04:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2019-12-21 13:04:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-21 13:05:24 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://12cf8c748e3d6ffb479483450b8cd316c4fe9a90819928be90ae975c97e6e7ae}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.073: INFO: Pod "nginx-deployment-85ddf47c5d-6h456" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6h456,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-6h456,UID:974462d0-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573253,Generation:0,CreationTimestamp:2019-12-21 13:05:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc002937f77 0xc002937f78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002937fe0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002798010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.074: INFO: Pod "nginx-deployment-85ddf47c5d-6s59l" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6s59l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-6s59l,UID:727bb9f9-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573094,Generation:0,CreationTimestamp:2019-12-21 13:04:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc002798087 0xc002798088}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0027980f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002798110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:04:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:04:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2019-12-21 13:04:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-21 13:05:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://569a3d317b523ebd655ada191a9d5fb42b4c5babe534fc37d23f36d858588c63}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.075: INFO: Pod "nginx-deployment-85ddf47c5d-6ttkg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6ttkg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-6ttkg,UID:97420e0a-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573247,Generation:0,CreationTimestamp:2019-12-21 13:05:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc0027981d7 0xc0027981d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002798390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027983b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.075: INFO: Pod "nginx-deployment-85ddf47c5d-7r2wv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7r2wv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-7r2wv,UID:97bf8a09-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573269,Generation:0,CreationTimestamp:2019-12-21 13:05:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc002798427 0xc002798428}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002798490} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027984b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.076: INFO: Pod "nginx-deployment-85ddf47c5d-9sc94" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9sc94,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-9sc94,UID:97bf73b0-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573272,Generation:0,CreationTimestamp:2019-12-21 13:05:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc002798687 0xc002798688}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002798980} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027989a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.077: INFO: Pod "nginx-deployment-85ddf47c5d-fkkmq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fkkmq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-fkkmq,UID:96721fe7-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573267,Generation:0,CreationTimestamp:2019-12-21 13:05:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc002798b77 0xc002798b78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002798c90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002798cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-21 13:05:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.077: INFO: Pod "nginx-deployment-85ddf47c5d-g2xht" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-g2xht,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-g2xht,UID:9743f934-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573242,Generation:0,CreationTimestamp:2019-12-21 13:05:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc002798d67 0xc002798d68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002798dd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002798df0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.078: INFO: Pod "nginx-deployment-85ddf47c5d-h56zv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-h56zv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-h56zv,UID:97bf2a33-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573270,Generation:0,CreationTimestamp:2019-12-21 13:05:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc002798ee7 0xc002798ee8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002798f50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002798f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.078: INFO: Pod "nginx-deployment-85ddf47c5d-hkzsv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hkzsv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-hkzsv,UID:728436fd-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573141,Generation:0,CreationTimestamp:2019-12-21 13:04:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc002798fe7 0xc002798fe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002799050} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002799070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:04:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:04:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-21 13:04:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-21 13:05:21 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0ec3a3d959d8f92e4e4e19e74f686e9a7c071db5f1b8338388c573abbac8b510}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.079: INFO: Pod "nginx-deployment-85ddf47c5d-l9wn4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-l9wn4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-l9wn4,UID:72e39258-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573137,Generation:0,CreationTimestamp:2019-12-21 13:04:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc002799137 0xc002799138}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0027991a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027991c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:04:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:04:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2019-12-21 13:04:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-21 13:05:23 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://7ff52039c9ed28d3328f3336749114f3da2e5939e52c8365950361c971299ba7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.079: INFO: Pod "nginx-deployment-85ddf47c5d-mrdh8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mrdh8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-mrdh8,UID:97bface3-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573268,Generation:0,CreationTimestamp:2019-12-21 13:05:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc0027992f7 0xc0027992f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002799360} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002799380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.080: INFO: Pod "nginx-deployment-85ddf47c5d-pptfl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pptfl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-pptfl,UID:9744cbc0-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573250,Generation:0,CreationTimestamp:2019-12-21 13:05:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc0027993f7 0xc0027993f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002799460} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002799480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.080: INFO: Pod "nginx-deployment-85ddf47c5d-rt87r" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rt87r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-rt87r,UID:97bf5489-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573271,Generation:0,CreationTimestamp:2019-12-21 13:05:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc0027994f7 0xc0027994f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002799560} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002799580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:43 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.080: INFO: Pod "nginx-deployment-85ddf47c5d-sg9tg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sg9tg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-sg9tg,UID:7284b83c-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573128,Generation:0,CreationTimestamp:2019-12-21 13:04:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc002799687 0xc002799688}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0027996f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002799710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:04:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:04:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2019-12-21 13:04:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-21 13:05:23 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c5b303995f5b0cc73decc31e6f57a14c9afe3ef6a26c68746dcb2829e294cc8c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.081: INFO: Pod "nginx-deployment-85ddf47c5d-t2kcn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t2kcn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-t2kcn,UID:72aef636-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573145,Generation:0,CreationTimestamp:2019-12-21 13:04:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc0027997d7 0xc0027997d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002799980} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027999a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:04:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:04:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2019-12-21 13:04:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-21 13:05:24 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4f9cb5036a0eb190a13228dc98859f81c4d486f898e7107c4aebce3718c82df6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.081: INFO: Pod "nginx-deployment-85ddf47c5d-thktz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-thktz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-thktz,UID:72adfdb6-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573131,Generation:0,CreationTimestamp:2019-12-21 13:04:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc002799a67 0xc002799a68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002799ad0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002799af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:04:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:04:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2019-12-21 13:04:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-21 13:05:24 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6eee64e236c8e186893b57dd40c731c4fcac077e2b19cff40793be3f283bd2da}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.082: INFO: Pod "nginx-deployment-85ddf47c5d-xsnxz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xsnxz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-xsnxz,UID:9677bbfa-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573297,Generation:0,CreationTimestamp:2019-12-21 13:05:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc002799c47 0xc002799c48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002799cb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002799cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-21 13:05:45 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.082: INFO: Pod "nginx-deployment-85ddf47c5d-xtpdq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xtpdq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-xtpdq,UID:9677c260-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573287,Generation:0,CreationTimestamp:2019-12-21 13:05:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc002799df7 0xc002799df8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002799e60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002799e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2019-12-21 13:05:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 21 13:05:48.082: INFO: Pod "nginx-deployment-85ddf47c5d-zcqdv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zcqdv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-bvv8c,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bvv8c/pods/nginx-deployment-85ddf47c5d-zcqdv,UID:72e389b6-23f2-11ea-a994-fa163e34d433,ResourceVersion:15573133,Generation:0,CreationTimestamp:2019-12-21 13:04:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 7256639d-23f2-11ea-a994-fa163e34d433 0xc002799fa7 0xc002799fa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-94cbw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-94cbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-94cbw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002628010} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002628030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:04:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:05:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:04:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2019-12-21 13:04:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-21 13:05:24 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://383e9fedcd1543206e1edaebc3f5e1323a392afe5b33bae5523e1d50c58dbccb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:05:48.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-bvv8c" for this suite.
Dec 21 13:07:19.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:07:20.881: INFO: namespace: e2e-tests-deployment-bvv8c, resource: bindings, ignored listing per whitelist
Dec 21 13:07:20.925: INFO: namespace e2e-tests-deployment-bvv8c deletion completed in 1m31.462370424s

• [SLOW TEST:161.904 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:07:20.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-w4s6n A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-w4s6n;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-w4s6n A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-w4s6n;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-w4s6n.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-w4s6n.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-w4s6n.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-w4s6n.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-w4s6n.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-w4s6n.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-w4s6n.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w4s6n.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-w4s6n.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-w4s6n.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-w4s6n.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-w4s6n.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-w4s6n.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 155.247.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.247.155_udp@PTR;check="$$(dig +tcp +noall +answer +search 155.247.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.247.155_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-w4s6n A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-w4s6n;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-w4s6n A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-w4s6n.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-w4s6n.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-w4s6n.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-w4s6n.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w4s6n.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-w4s6n.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w4s6n.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-w4s6n.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-w4s6n.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-w4s6n.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-w4s6n.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-w4s6n.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 155.247.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.247.155_udp@PTR;check="$$(dig +tcp +noall +answer +search 155.247.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.247.155_tcp@PTR;sleep 1; done

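The wheezy and jessie one-liners recorded above share one pattern: each lookup is attempted with `dig`, and a non-empty answer writes an `OK` marker file under `/results` that the test framework later polls. A minimal, self-contained sketch of that pattern (the `resolve` stub stands in for `dig +noall +answer +search <name> A`; all names and paths here are illustrative, not the test's actual ones):

```shell
#!/bin/sh
# Sketch of the probe loop logged above: for each DNS name, run a lookup,
# and if the answer section is non-empty, drop an OK marker file.
# "resolve" is a stub standing in for: dig +notcp +noall +answer +search "$1" A

resolve() {
  # Stub resolver: pretend only kubernetes.default resolves.
  case "$1" in
    kubernetes.default) echo "10.96.0.1" ;;
    *) : ;;  # empty answer, like a failed lookup
  esac
}

results_dir="${TMPDIR:-/tmp}/results"
mkdir -p "$results_dir"

for name in kubernetes.default no-such-service; do
  check="$(resolve "$name")"
  # Only a non-empty answer counts as success, mirroring `test -n "$check"`
  # in the logged commands.
  if test -n "$check"; then
    echo OK > "$results_dir/udp@$name"
  fi
done
```

In the real test the loop retries every second for up to 600 iterations, and the probe pod writes markers for each protocol/name combination (e.g. `wheezy_udp@dns-test-service`); the framework's repeated "Unable to read ... from pod" lines below are it polling for those marker files before they exist.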
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 21 13:08:06.736: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:06.895: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:06.900: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:06.913: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w4s6n from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:06.924: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:06.930: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w4s6n.svc from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:06.934: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n.svc from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:06.938: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w4s6n.svc from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:06.940: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w4s6n.svc from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:06.943: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-w4s6n.svc from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:06.947: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-w4s6n.svc from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:06.950: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:06.952: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:06.989: INFO: Lookups using e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005 failed for: [wheezy_udp@dns-test-service jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-w4s6n jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n jessie_udp@dns-test-service.e2e-tests-dns-w4s6n.svc jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-w4s6n.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-w4s6n.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-w4s6n.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-w4s6n.svc jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 21 13:08:12.244: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:12.253: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:12.297: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w4s6n from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:12.309: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:12.319: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w4s6n.svc from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:12.328: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n.svc from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:12.398: INFO: Lookups using e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005 failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-w4s6n jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n jessie_udp@dns-test-service.e2e-tests-dns-w4s6n.svc jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n.svc]

Dec 21 13:08:18.154: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:18.160: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:18.163: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w4s6n from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:18.167: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:18.170: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w4s6n.svc from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:18.173: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n.svc from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:18.203: INFO: Lookups using e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005 failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-w4s6n jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n jessie_udp@dns-test-service.e2e-tests-dns-w4s6n.svc jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n.svc]

Dec 21 13:08:22.708: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:22.747: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:22.790: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w4s6n from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:22.822: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:22.857: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w4s6n.svc from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:22.880: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n.svc from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:23.125: INFO: Lookups using e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005 failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-w4s6n jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n jessie_udp@dns-test-service.e2e-tests-dns-w4s6n.svc jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n.svc]

Dec 21 13:08:27.498: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:27.521: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:27.565: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w4s6n from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:27.583: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:27.594: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w4s6n.svc from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:27.605: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n.svc from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:27.684: INFO: Lookups using e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005 failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-w4s6n jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n jessie_udp@dns-test-service.e2e-tests-dns-w4s6n.svc jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n.svc]

Dec 21 13:08:32.521: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:32.527: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:32.543: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w4s6n from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:32.557: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:32.581: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-w4s6n.svc from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:32.598: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n.svc from pod e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005: the server could not find the requested resource (get pods dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005)
Dec 21 13:08:32.727: INFO: Lookups using e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005 failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-w4s6n jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n jessie_udp@dns-test-service.e2e-tests-dns-w4s6n.svc jessie_tcp@dns-test-service.e2e-tests-dns-w4s6n.svc]

Dec 21 13:08:37.382: INFO: DNS probes using e2e-tests-dns-w4s6n/dns-test-d4367f49-23f2-11ea-bbd3-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:08:38.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-w4s6n" for this suite.
Dec 21 13:08:47.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:08:47.297: INFO: namespace: e2e-tests-dns-w4s6n, resource: bindings, ignored listing per whitelist
Dec 21 13:08:47.375: INFO: namespace e2e-tests-dns-w4s6n deletion completed in 8.558871621s

• [SLOW TEST:86.449 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
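As an aside for readers of this log: the per-record probes above follow the dig loop shown at the top of the run (`check="$(dig ...)" && test -n "$check" && echo OK > /results/...`). A minimal sketch of that retry pattern, with a stand-in `lookup` function in place of dig so it runs without cluster DNS:

```shell
# Sketch of the probe loop the DNS conformance test runs in its client pods.
# The pattern is taken from the dig commands earlier in this log; `lookup` and
# the returned address are stand-ins, not the real test code.
results_dir="$(mktemp -d)"

lookup() {  # stand-in for: dig +notcp +noall +answer +search kubernetes.default A
  echo "10.96.0.1"
}

for i in $(seq 1 5); do   # the real loop retries up to 600 times
  check="$(lookup)" && test -n "$check" \
    && echo OK > "$results_dir/wheezy_udp@kubernetes.default" \
    && break
  sleep 1
done
cat "$results_dir/wheezy_udp@kubernetes.default"
```

The test then reads each `OK` marker file back through the pod, which is why every failed read above names a specific `wheezy_*@...` or `jessie_*@...` record.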
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:08:47.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-rztsv/secret-test-062f887c-23f3-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 21 13:08:47.792: INFO: Waiting up to 5m0s for pod "pod-configmaps-06487f67-23f3-11ea-bbd3-0242ac110005" in namespace "e2e-tests-secrets-rztsv" to be "success or failure"
Dec 21 13:08:47.809: INFO: Pod "pod-configmaps-06487f67-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.168408ms
Dec 21 13:08:50.052: INFO: Pod "pod-configmaps-06487f67-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.259949658s
Dec 21 13:08:52.078: INFO: Pod "pod-configmaps-06487f67-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.286029949s
Dec 21 13:08:54.086: INFO: Pod "pod-configmaps-06487f67-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.294381796s
Dec 21 13:08:56.897: INFO: Pod "pod-configmaps-06487f67-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.105093823s
Dec 21 13:08:59.280: INFO: Pod "pod-configmaps-06487f67-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.488301662s
Dec 21 13:09:01.362: INFO: Pod "pod-configmaps-06487f67-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.570489044s
Dec 21 13:09:03.383: INFO: Pod "pod-configmaps-06487f67-23f3-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.591375211s
STEP: Saw pod success
Dec 21 13:09:03.383: INFO: Pod "pod-configmaps-06487f67-23f3-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 13:09:03.391: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-06487f67-23f3-11ea-bbd3-0242ac110005 container env-test: 
STEP: delete the pod
Dec 21 13:09:03.615: INFO: Waiting for pod pod-configmaps-06487f67-23f3-11ea-bbd3-0242ac110005 to disappear
Dec 21 13:09:03.712: INFO: Pod pod-configmaps-06487f67-23f3-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:09:03.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-rztsv" for this suite.
Dec 21 13:09:09.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:09:10.014: INFO: namespace: e2e-tests-secrets-rztsv, resource: bindings, ignored listing per whitelist
Dec 21 13:09:10.056: INFO: namespace e2e-tests-secrets-rztsv deletion completed in 6.336171499s

• [SLOW TEST:22.681 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
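The `Waiting up to 5m0s for pod ... to be "success or failure"` lines above are the framework polling the pod phase until it reaches a terminal state. A sketch of that wait, with canned phases standing in for the API server (the poll interval and exact helper are assumed from the log, not taken from the test source):

```shell
# Sketch of the "success or failure" wait: poll the pod phase until it is
# terminal (Succeeded/Failed) or the timeout elapses. Canned phases stand in
# for repeated GETs against the API server; "pod-xxx" is a placeholder name.
phases="Pending Pending Pending Succeeded"

elapsed=0
final_phase=""
for phase in $phases; do
  elapsed=$((elapsed + 2))   # the log suggests roughly 2s between polls, up to 5m
  echo "Pod \"pod-xxx\": Phase=\"$phase\". Elapsed: ${elapsed}s"
  case "$phase" in
    Succeeded|Failed) final_phase="$phase"; break ;;
  esac
done
[ "$final_phase" = "Succeeded" ] && echo 'satisfied condition "success or failure"'
```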
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:09:10.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Dec 21 13:09:10.275: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix097017036/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:09:10.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jq4k4" for this suite.
Dec 21 13:09:16.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:09:16.715: INFO: namespace: e2e-tests-kubectl-jq4k4, resource: bindings, ignored listing per whitelist
Dec 21 13:09:16.840: INFO: namespace e2e-tests-kubectl-jq4k4 deletion completed in 6.461569303s

• [SLOW TEST:6.783 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
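The proxy spec above starts `kubectl proxy --unix-socket=/tmp/...` and then retrieves `/api/` through that socket. A self-contained sketch of the same mechanism, with a tiny Python HTTP server standing in for `kubectl proxy` so no cluster is needed; the client side (`curl --unix-socket`) is the same kind of request the framework issues:

```shell
# Sketch of serving and querying HTTP over a unix socket, as the
# --unix-socket proxy test does. The server and its JSON payload are
# stand-ins for kubectl proxy and the real /api/ response.
sock="$(mktemp -d)/test.sock"
python3 - "$sock" <<'PY' &
import http.server, socketserver, sys

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"kind":"APIVersions"}'  # stand-in for the real /api/ payload
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the sketch's output clean
        pass

class UnixHTTPServer(socketserver.UnixStreamServer):
    pass

with UnixHTTPServer(sys.argv[1], Handler) as srv:
    srv.handle_request()  # serve exactly one request, then exit
PY
sleep 1
resp="$(curl -s --unix-socket "$sock" http://localhost/api/)"
echo "$resp"
wait
```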
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:09:16.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Dec 21 13:09:17.065: INFO: Waiting up to 5m0s for pod "var-expansion-17b4a0b6-23f3-11ea-bbd3-0242ac110005" in namespace "e2e-tests-var-expansion-df9xd" to be "success or failure"
Dec 21 13:09:17.157: INFO: Pod "var-expansion-17b4a0b6-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 91.328164ms
Dec 21 13:09:19.497: INFO: Pod "var-expansion-17b4a0b6-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.431178851s
Dec 21 13:09:21.519: INFO: Pod "var-expansion-17b4a0b6-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.453433147s
Dec 21 13:09:24.661: INFO: Pod "var-expansion-17b4a0b6-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.595675364s
Dec 21 13:09:26.700: INFO: Pod "var-expansion-17b4a0b6-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.634942569s
Dec 21 13:09:28.804: INFO: Pod "var-expansion-17b4a0b6-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.738516472s
Dec 21 13:09:30.818: INFO: Pod "var-expansion-17b4a0b6-23f3-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.752481726s
STEP: Saw pod success
Dec 21 13:09:30.818: INFO: Pod "var-expansion-17b4a0b6-23f3-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 13:09:30.856: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-17b4a0b6-23f3-11ea-bbd3-0242ac110005 container dapi-container: 
STEP: delete the pod
Dec 21 13:09:31.967: INFO: Waiting for pod var-expansion-17b4a0b6-23f3-11ea-bbd3-0242ac110005 to disappear
Dec 21 13:09:32.042: INFO: Pod var-expansion-17b4a0b6-23f3-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:09:32.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-df9xd" for this suite.
Dec 21 13:09:38.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:09:38.343: INFO: namespace: e2e-tests-var-expansion-df9xd, resource: bindings, ignored listing per whitelist
Dec 21 13:09:38.417: INFO: namespace e2e-tests-var-expansion-df9xd deletion completed in 6.205512707s

• [SLOW TEST:21.577 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:09:38.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:10:39.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-v6hr4" for this suite.
Dec 21 13:10:47.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:10:47.340: INFO: namespace: e2e-tests-container-runtime-v6hr4, resource: bindings, ignored listing per whitelist
Dec 21 13:10:47.418: INFO: namespace e2e-tests-container-runtime-v6hr4 deletion completed in 8.389802955s

• [SLOW TEST:69.001 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
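The container names in the spec above read as RestartPolicy abbreviations (`rpa` = Always, `rpof` = OnFailure, `rpn` = Never) — an inference from the names, not from the test source. Under that reading, the expected `RestartCount` checks reduce to the standard restart decision, sketched here:

```shell
# Sketch of the kubelet's restart decision per pod restartPolicy and the
# container's exit code (policy semantics as documented for pod lifecycle;
# the function name is illustrative, not from the e2e test).
should_restart() {  # $1 = restartPolicy, $2 = container exit code
  case "$1" in
    Always)    echo yes ;;
    OnFailure) if [ "$2" -ne 0 ]; then echo yes; else echo no; fi ;;
    Never)     echo no ;;
  esac
}
echo "Always/exit 0:    $(should_restart Always 0)"
echo "OnFailure/exit 0: $(should_restart OnFailure 0)"
echo "OnFailure/exit 1: $(should_restart OnFailure 1)"
echo "Never/exit 1:     $(should_restart Never 1)"
```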
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:10:47.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Dec 21 13:10:47.809: INFO: Waiting up to 5m0s for pod "client-containers-4dbb7c52-23f3-11ea-bbd3-0242ac110005" in namespace "e2e-tests-containers-pzjrc" to be "success or failure"
Dec 21 13:10:47.833: INFO: Pod "client-containers-4dbb7c52-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.239205ms
Dec 21 13:10:50.041: INFO: Pod "client-containers-4dbb7c52-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231722088s
Dec 21 13:10:52.062: INFO: Pod "client-containers-4dbb7c52-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.252594943s
Dec 21 13:10:54.121: INFO: Pod "client-containers-4dbb7c52-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.311811297s
Dec 21 13:10:56.423: INFO: Pod "client-containers-4dbb7c52-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.613479118s
Dec 21 13:10:58.459: INFO: Pod "client-containers-4dbb7c52-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.649363939s
Dec 21 13:11:00.480: INFO: Pod "client-containers-4dbb7c52-23f3-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.670729377s
STEP: Saw pod success
Dec 21 13:11:00.481: INFO: Pod "client-containers-4dbb7c52-23f3-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 13:11:00.497: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-4dbb7c52-23f3-11ea-bbd3-0242ac110005 container test-container: 
STEP: delete the pod
Dec 21 13:11:01.401: INFO: Waiting for pod client-containers-4dbb7c52-23f3-11ea-bbd3-0242ac110005 to disappear
Dec 21 13:11:01.448: INFO: Pod client-containers-4dbb7c52-23f3-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:11:01.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-pzjrc" for this suite.
Dec 21 13:11:07.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:11:07.836: INFO: namespace: e2e-tests-containers-pzjrc, resource: bindings, ignored listing per whitelist
Dec 21 13:11:07.914: INFO: namespace e2e-tests-containers-pzjrc deletion completed in 6.263228382s

• [SLOW TEST:20.495 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:11:07.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 21 13:11:08.291: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 21 13:11:13.321: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 21 13:11:21.351: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 21 13:11:23.373: INFO: Creating deployment "test-rollover-deployment"
Dec 21 13:11:23.447: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 21 13:11:25.895: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 21 13:11:26.447: INFO: Ensure that both replica sets have 1 created replica
Dec 21 13:11:26.552: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 21 13:11:27.199: INFO: Updating deployment test-rollover-deployment
Dec 21 13:11:27.199: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 21 13:11:29.356: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 21 13:11:29.378: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 21 13:11:29.398: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:11:29.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:11:31.439: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:11:31.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:11:33.431: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:11:33.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:11:35.633: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:11:35.634: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:11:37.423: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:11:37.423: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:11:39.423: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:11:39.424: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530688, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:11:41.433: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:11:41.434: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530699, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:11:43.469: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:11:43.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530699, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:11:45.435: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:11:45.435: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530699, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:11:47.453: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:11:47.453: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530699, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:11:49.428: INFO: all replica sets need to contain the pod-template-hash label
Dec 21 13:11:49.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530699, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712530683, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 21 13:11:51.816: INFO: 
Dec 21 13:11:51.816: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Dec 21 13:11:51.839: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-zxb9g,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zxb9g/deployments/test-rollover-deployment,UID:630cc2e0-23f3-11ea-a994-fa163e34d433,ResourceVersion:15574135,Generation:2,CreationTimestamp:2019-12-21 13:11:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-21 13:11:23 +0000 UTC 2019-12-21 13:11:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-21 13:11:50 +0000 UTC 2019-12-21 13:11:23 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 21 13:11:51.848: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-zxb9g,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zxb9g/replicasets/test-rollover-deployment-5b8479fdb6,UID:654fd0dd-23f3-11ea-a994-fa163e34d433,ResourceVersion:15574125,Generation:2,CreationTimestamp:2019-12-21 13:11:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 630cc2e0-23f3-11ea-a994-fa163e34d433 0xc001384077 0xc001384078}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 21 13:11:51.848: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 21 13:11:51.848: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-zxb9g,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zxb9g/replicasets/test-rollover-controller,UID:59f5e1c2-23f3-11ea-a994-fa163e34d433,ResourceVersion:15574134,Generation:2,CreationTimestamp:2019-12-21 13:11:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 630cc2e0-23f3-11ea-a994-fa163e34d433 0xc0016f8e27 0xc0016f8e28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 21 13:11:51.849: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-zxb9g,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-zxb9g/replicasets/test-rollover-deployment-58494b7559,UID:6320594d-23f3-11ea-a994-fa163e34d433,ResourceVersion:15574091,Generation:2,CreationTimestamp:2019-12-21 13:11:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 630cc2e0-23f3-11ea-a994-fa163e34d433 0xc0016f9d87 0xc0016f9d88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 21 13:11:51.872: INFO: Pod "test-rollover-deployment-5b8479fdb6-htrwl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-htrwl,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-zxb9g,SelfLink:/api/v1/namespaces/e2e-tests-deployment-zxb9g/pods/test-rollover-deployment-5b8479fdb6-htrwl,UID:65c903c1-23f3-11ea-a994-fa163e34d433,ResourceVersion:15574111,Generation:0,CreationTimestamp:2019-12-21 13:11:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 654fd0dd-23f3-11ea-a994-fa163e34d433 0xc000a12937 0xc000a12938}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pk8bp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pk8bp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-pk8bp true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000a129c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000a129e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:11:28 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:11:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:11:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-21 13:11:28 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2019-12-21 13:11:28 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-21 13:11:38 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://43fda54e643922b9d020250c322f92c125471b79e4a45b98e2424c289a879a4b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:11:51.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-zxb9g" for this suite.
Dec 21 13:12:00.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:12:00.991: INFO: namespace: e2e-tests-deployment-zxb9g, resource: bindings, ignored listing per whitelist
Dec 21 13:12:01.008: INFO: namespace e2e-tests-deployment-zxb9g deletion completed in 9.12072509s

• [SLOW TEST:53.094 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:12:01.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Dec 21 13:12:16.103: INFO: Waiting up to 5m0s for pod "client-envvars-82585df6-23f3-11ea-bbd3-0242ac110005" in namespace "e2e-tests-pods-v8llj" to be "success or failure"
Dec 21 13:12:16.159: INFO: Pod "client-envvars-82585df6-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 55.974643ms
Dec 21 13:12:18.179: INFO: Pod "client-envvars-82585df6-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076634342s
Dec 21 13:12:20.195: INFO: Pod "client-envvars-82585df6-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092503579s
Dec 21 13:12:22.228: INFO: Pod "client-envvars-82585df6-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12525751s
Dec 21 13:12:24.838: INFO: Pod "client-envvars-82585df6-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.735344446s
Dec 21 13:12:26.857: INFO: Pod "client-envvars-82585df6-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.754512289s
Dec 21 13:12:28.888: INFO: Pod "client-envvars-82585df6-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.785479946s
Dec 21 13:12:30.910: INFO: Pod "client-envvars-82585df6-23f3-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.807117913s
STEP: Saw pod success
Dec 21 13:12:30.910: INFO: Pod "client-envvars-82585df6-23f3-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 13:12:30.915: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-82585df6-23f3-11ea-bbd3-0242ac110005 container env3cont: 
STEP: delete the pod
Dec 21 13:12:31.087: INFO: Waiting for pod client-envvars-82585df6-23f3-11ea-bbd3-0242ac110005 to disappear
Dec 21 13:12:31.109: INFO: Pod client-envvars-82585df6-23f3-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:12:31.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-v8llj" for this suite.
Dec 21 13:13:29.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:13:29.476: INFO: namespace: e2e-tests-pods-v8llj, resource: bindings, ignored listing per whitelist
Dec 21 13:13:29.500: INFO: namespace e2e-tests-pods-v8llj deletion completed in 58.376978488s

• [SLOW TEST:88.491 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:13:29.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 21 13:13:29.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-v8nfh'
Dec 21 13:13:33.495: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 21 13:13:33.495: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 21 13:13:33.666: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 21 13:13:33.942: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 21 13:13:33.998: INFO: scanned /root for discovery docs: 
Dec 21 13:13:33.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-v8nfh'
Dec 21 13:14:05.078: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 21 13:14:05.078: INFO: stdout: "Created e2e-test-nginx-rc-50ae56364e505a018845f6b8b16d3ff6\nScaling up e2e-test-nginx-rc-50ae56364e505a018845f6b8b16d3ff6 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-50ae56364e505a018845f6b8b16d3ff6 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-50ae56364e505a018845f6b8b16d3ff6 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 21 13:14:05.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-v8nfh'
Dec 21 13:14:05.256: INFO: stderr: ""
Dec 21 13:14:05.256: INFO: stdout: "e2e-test-nginx-rc-50ae56364e505a018845f6b8b16d3ff6-ccl7n "
Dec 21 13:14:05.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-50ae56364e505a018845f6b8b16d3ff6-ccl7n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v8nfh'
Dec 21 13:14:05.441: INFO: stderr: ""
Dec 21 13:14:05.441: INFO: stdout: "true"
Dec 21 13:14:05.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-50ae56364e505a018845f6b8b16d3ff6-ccl7n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v8nfh'
Dec 21 13:14:05.642: INFO: stderr: ""
Dec 21 13:14:05.642: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 21 13:14:05.642: INFO: e2e-test-nginx-rc-50ae56364e505a018845f6b8b16d3ff6-ccl7n is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Dec 21 13:14:05.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-v8nfh'
Dec 21 13:14:05.798: INFO: stderr: ""
Dec 21 13:14:05.799: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:14:05.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-v8nfh" for this suite.
Dec 21 13:14:29.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:14:30.056: INFO: namespace: e2e-tests-kubectl-v8nfh, resource: bindings, ignored listing per whitelist
Dec 21 13:14:30.075: INFO: namespace e2e-tests-kubectl-v8nfh deletion completed in 24.231519915s

• [SLOW TEST:60.575 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
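The stdout captured at the top of the block above narrates kubectl's client-side rolling update: scale the new RC up and the old one down one pod at a time, keeping at least 1 pod available and never exceeding 2, then rename. A step-by-step trace of that invariant can be sketched as a pure simulation (no cluster involved; the variable names are illustrative, not part of kubectl):

```shell
# Simulation of the rolling-update narration in the log: surge the new
# RC up first, then scale the old RC down, so the total pod count
# stays between 1 (keep 1 available) and 2 (don't exceed 2).
old=1 new=0
while [ "$new" -lt 1 ]; do
  new=$((new + 1))          # surge: scale new RC up first
  echo "old=$old new=$new total=$((old + new))"
  old=$((old - 1))          # then scale old RC down
  echo "old=$old new=$new total=$((old + new))"
done
```

Each printed line corresponds to one "Scaling ... up/down" line in the log's stdout.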
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:14:30.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 21 13:14:30.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-69vth'
Dec 21 13:14:30.393: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 21 13:14:30.393: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Dec 21 13:14:34.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-69vth'
Dec 21 13:14:35.100: INFO: stderr: ""
Dec 21 13:14:35.100: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:14:35.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-69vth" for this suite.
Dec 21 13:14:41.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:14:41.484: INFO: namespace: e2e-tests-kubectl-69vth, resource: bindings, ignored listing per whitelist
Dec 21 13:14:41.496: INFO: namespace e2e-tests-kubectl-69vth deletion completed in 6.283630783s

• [SLOW TEST:11.419 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
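The stderr in the block above warns that `kubectl run --generator=deployment/v1beta1` is deprecated and points at `kubectl create` instead. For anyone replaying this step by hand, a sketch of the replacement invocation — assembled and echoed only, since no cluster is assumed here:

```shell
# The deprecation warning in the log names `kubectl create` as the
# successor to generator-based `kubectl run`. No cluster is assumed,
# so the command is only assembled and printed, not executed.
image="docker.io/library/nginx:1.14-alpine"
cmd="kubectl create deployment e2e-test-nginx-deployment --image=${image}"
echo "$cmd"
```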
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:14:41.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:14:55.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-qkwsw" for this suite.
Dec 21 13:15:02.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:15:02.331: INFO: namespace: e2e-tests-emptydir-wrapper-qkwsw, resource: bindings, ignored listing per whitelist
Dec 21 13:15:02.336: INFO: namespace e2e-tests-emptydir-wrapper-qkwsw deletion completed in 6.292262081s

• [SLOW TEST:20.839 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:15:02.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-h6tbf/configmap-test-e5c29274-23f3-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 21 13:15:02.882: INFO: Waiting up to 5m0s for pod "pod-configmaps-e5c912e6-23f3-11ea-bbd3-0242ac110005" in namespace "e2e-tests-configmap-h6tbf" to be "success or failure"
Dec 21 13:15:02.907: INFO: Pod "pod-configmaps-e5c912e6-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.990241ms
Dec 21 13:15:04.932: INFO: Pod "pod-configmaps-e5c912e6-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050376271s
Dec 21 13:15:06.949: INFO: Pod "pod-configmaps-e5c912e6-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066851002s
Dec 21 13:15:09.447: INFO: Pod "pod-configmaps-e5c912e6-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.565359592s
Dec 21 13:15:11.486: INFO: Pod "pod-configmaps-e5c912e6-23f3-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.604034458s
Dec 21 13:15:13.506: INFO: Pod "pod-configmaps-e5c912e6-23f3-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.624515772s
STEP: Saw pod success
Dec 21 13:15:13.506: INFO: Pod "pod-configmaps-e5c912e6-23f3-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 13:15:13.515: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-e5c912e6-23f3-11ea-bbd3-0242ac110005 container env-test: 
STEP: delete the pod
Dec 21 13:15:13.686: INFO: Waiting for pod pod-configmaps-e5c912e6-23f3-11ea-bbd3-0242ac110005 to disappear
Dec 21 13:15:13.691: INFO: Pod pod-configmaps-e5c912e6-23f3-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:15:13.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-h6tbf" for this suite.
Dec 21 13:15:20.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:15:20.927: INFO: namespace: e2e-tests-configmap-h6tbf, resource: bindings, ignored listing per whitelist
Dec 21 13:15:20.991: INFO: namespace e2e-tests-configmap-h6tbf deletion completed in 7.135180012s

• [SLOW TEST:18.654 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
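The repeated "Elapsed: ..." lines in the block above come from the framework polling the pod's phase until it reaches "success or failure" or the 5m timeout expires. A minimal shell sketch of that poll-until-condition pattern — `poll_until` and the stub check are illustrative; the real framework does this in Go with richer logging:

```shell
# Re-run a check every $interval seconds until it succeeds or $timeout
# elapses, mirroring the Pending/Pending/.../Succeeded progression in
# the log. The stub condition "succeeds" on the third probe.
poll_until() {
  local timeout=$1 interval=$2; shift 2
  local elapsed=0
  while [ "$elapsed" -le "$timeout" ]; do
    if "$@"; then
      echo "condition met after ${elapsed}s"
      return 0
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  echo "timed out after ${elapsed}s" >&2
  return 1
}

probe_count=0
pod_succeeded() {
  probe_count=$((probe_count + 1))
  [ "$probe_count" -ge 3 ]   # stub: Phase="Succeeded" on probe 3
}

poll_until 10 1 pod_succeeded
```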
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:15:20.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-w2vzl
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 21 13:15:21.181: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 21 13:15:57.408: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-w2vzl PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 21 13:15:57.409: INFO: >>> kubeConfig: /root/.kube/config
Dec 21 13:15:57.970: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:15:57.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-w2vzl" for this suite.
Dec 21 13:16:26.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:16:26.189: INFO: namespace: e2e-tests-pod-network-test-w2vzl, resource: bindings, ignored listing per whitelist
Dec 21 13:16:26.210: INFO: namespace e2e-tests-pod-network-test-w2vzl deletion completed in 28.221473136s

• [SLOW TEST:65.219 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
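The ExecWithOptions line in the block above pipes the `/hostName` response through `grep -v '^\s*$'` to discard blank lines before deciding the endpoint answered. That filter can be shown standalone; the printf stands in for the real `curl -s http://<pod-ip>:8080/hostName`, since no cluster pod is assumed:

```shell
# Sketch of the non-empty-response check from the networking test:
# strip blank lines from the endpoint's reply and require something to
# remain. printf simulates the curl against the netserver pod.
response="$(printf 'netserver-0\n\n')"
hostname="$(printf '%s' "$response" | grep -v '^[[:space:]]*$')"
if [ -n "$hostname" ]; then
  echo "found endpoint: $hostname"
fi
```

A non-empty result is what lets the test log "Found all expected endpoints".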
SSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:16:26.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-17b9064e-23f4-11ea-bbd3-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:16:40.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-df2p8" for this suite.
Dec 21 13:17:04.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:17:04.850: INFO: namespace: e2e-tests-configmap-df2p8, resource: bindings, ignored listing per whitelist
Dec 21 13:17:04.999: INFO: namespace e2e-tests-configmap-df2p8 deletion completed in 24.216948887s

• [SLOW TEST:38.788 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:17:04.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 21 13:17:05.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-cl4nk'
Dec 21 13:17:05.412: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 21 13:17:05.412: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Dec 21 13:17:05.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-cl4nk'
Dec 21 13:17:05.598: INFO: stderr: ""
Dec 21 13:17:05.598: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:17:05.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-cl4nk" for this suite.
Dec 21 13:17:29.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:17:29.951: INFO: namespace: e2e-tests-kubectl-cl4nk, resource: bindings, ignored listing per whitelist
Dec 21 13:17:30.033: INFO: namespace e2e-tests-kubectl-cl4nk deletion completed in 24.421839473s

• [SLOW TEST:25.034 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:17:30.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-3dc9ed2a-23f4-11ea-bbd3-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-3dc9ed2a-23f4-11ea-bbd3-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:17:47.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-zlxsz" for this suite.
Dec 21 13:18:11.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:18:11.167: INFO: namespace: e2e-tests-configmap-zlxsz, resource: bindings, ignored listing per whitelist
Dec 21 13:18:11.214: INFO: namespace e2e-tests-configmap-zlxsz deletion completed in 24.201305549s

• [SLOW TEST:41.181 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:18:11.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 21 13:18:11.531: INFO: Waiting up to 5m0s for pod "pod-564d96eb-23f4-11ea-bbd3-0242ac110005" in namespace "e2e-tests-emptydir-k77zb" to be "success or failure"
Dec 21 13:18:11.562: INFO: Pod "pod-564d96eb-23f4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.029832ms
Dec 21 13:18:13.736: INFO: Pod "pod-564d96eb-23f4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204940923s
Dec 21 13:18:15.789: INFO: Pod "pod-564d96eb-23f4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257671108s
Dec 21 13:18:18.619: INFO: Pod "pod-564d96eb-23f4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.087237765s
Dec 21 13:18:20.813: INFO: Pod "pod-564d96eb-23f4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.28179832s
Dec 21 13:18:22.941: INFO: Pod "pod-564d96eb-23f4-11ea-bbd3-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 11.409122737s
Dec 21 13:18:24.989: INFO: Pod "pod-564d96eb-23f4-11ea-bbd3-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 13.457851242s
Dec 21 13:18:27.134: INFO: Pod "pod-564d96eb-23f4-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.602409061s
STEP: Saw pod success
Dec 21 13:18:27.134: INFO: Pod "pod-564d96eb-23f4-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 13:18:27.143: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-564d96eb-23f4-11ea-bbd3-0242ac110005 container test-container: 
STEP: delete the pod
Dec 21 13:18:27.293: INFO: Waiting for pod pod-564d96eb-23f4-11ea-bbd3-0242ac110005 to disappear
Dec 21 13:18:27.323: INFO: Pod pod-564d96eb-23f4-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:18:27.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-k77zb" for this suite.
Dec 21 13:18:35.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:18:35.558: INFO: namespace: e2e-tests-emptydir-k77zb, resource: bindings, ignored listing per whitelist
Dec 21 13:18:35.577: INFO: namespace e2e-tests-emptydir-k77zb deletion completed in 8.243265098s

• [SLOW TEST:24.362 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
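The test name in the block above, "(non-root,0644,default)", encodes the variant: run as a non-root user, expect file mode 0644, default (disk-backed) emptyDir medium. The permission part can be illustrated locally (the temp file and stat invocation are stand-ins, not the e2e test container's actual commands):

```shell
# Create a file, set the 0644 mode the test variant expects, and read
# the mode back. `stat -c '%a'` is GNU coreutils; the `-f '%Lp'`
# fallback covers BSD stat.
tmpfile="$(mktemp)"
chmod 0644 "$tmpfile"
mode="$(stat -c '%a' "$tmpfile" 2>/dev/null || stat -f '%Lp' "$tmpfile")"
echo "mount-file mode: $mode"
rm -f "$tmpfile"
```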
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:18:35.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-64d5b13e-23f4-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume secrets
Dec 21 13:18:36.049: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-64e6b18b-23f4-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-tz7qd" to be "success or failure"
Dec 21 13:18:36.072: INFO: Pod "pod-projected-secrets-64e6b18b-23f4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.771933ms
Dec 21 13:18:38.309: INFO: Pod "pod-projected-secrets-64e6b18b-23f4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260092178s
Dec 21 13:18:40.329: INFO: Pod "pod-projected-secrets-64e6b18b-23f4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280060036s
Dec 21 13:18:43.455: INFO: Pod "pod-projected-secrets-64e6b18b-23f4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.405984873s
Dec 21 13:18:45.483: INFO: Pod "pod-projected-secrets-64e6b18b-23f4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.433896882s
Dec 21 13:18:47.573: INFO: Pod "pod-projected-secrets-64e6b18b-23f4-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.523723415s
STEP: Saw pod success
Dec 21 13:18:47.573: INFO: Pod "pod-projected-secrets-64e6b18b-23f4-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 13:18:47.587: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-64e6b18b-23f4-11ea-bbd3-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Dec 21 13:18:47.894: INFO: Waiting for pod pod-projected-secrets-64e6b18b-23f4-11ea-bbd3-0242ac110005 to disappear
Dec 21 13:18:47.912: INFO: Pod pod-projected-secrets-64e6b18b-23f4-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:18:47.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tz7qd" for this suite.
Dec 21 13:18:54.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:18:54.150: INFO: namespace: e2e-tests-projected-tz7qd, resource: bindings, ignored listing per whitelist
Dec 21 13:18:54.202: INFO: namespace e2e-tests-projected-tz7qd deletion completed in 6.219845052s

• [SLOW TEST:18.624 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:18:54.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 21 13:18:54.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-h4gcr'
Dec 21 13:18:54.808: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 21 13:18:54.808: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 21 13:18:55.037: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-knk7r]
Dec 21 13:18:55.038: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-knk7r" in namespace "e2e-tests-kubectl-h4gcr" to be "running and ready"
Dec 21 13:18:55.044: INFO: Pod "e2e-test-nginx-rc-knk7r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.292017ms
Dec 21 13:18:57.060: INFO: Pod "e2e-test-nginx-rc-knk7r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021902883s
Dec 21 13:18:59.088: INFO: Pod "e2e-test-nginx-rc-knk7r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050302604s
Dec 21 13:19:02.051: INFO: Pod "e2e-test-nginx-rc-knk7r": Phase="Pending", Reason="", readiness=false. Elapsed: 7.013769622s
Dec 21 13:19:04.080: INFO: Pod "e2e-test-nginx-rc-knk7r": Phase="Pending", Reason="", readiness=false. Elapsed: 9.042122097s
Dec 21 13:19:06.101: INFO: Pod "e2e-test-nginx-rc-knk7r": Phase="Running", Reason="", readiness=true. Elapsed: 11.063246341s
Dec 21 13:19:06.101: INFO: Pod "e2e-test-nginx-rc-knk7r" satisfied condition "running and ready"
Dec 21 13:19:06.101: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-knk7r]
Dec 21 13:19:06.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-h4gcr'
Dec 21 13:19:06.340: INFO: stderr: ""
Dec 21 13:19:06.341: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Dec 21 13:19:06.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-h4gcr'
Dec 21 13:19:06.561: INFO: stderr: ""
Dec 21 13:19:06.561: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:19:06.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-h4gcr" for this suite.
Dec 21 13:19:30.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:19:30.955: INFO: namespace: e2e-tests-kubectl-h4gcr, resource: bindings, ignored listing per whitelist
Dec 21 13:19:31.023: INFO: namespace e2e-tests-kubectl-h4gcr deletion completed in 24.377831617s

• [SLOW TEST:36.821 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:19:31.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Dec 21 13:19:31.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:19:31.838: INFO: stderr: ""
Dec 21 13:19:31.838: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 21 13:19:31.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:19:32.039: INFO: stderr: ""
Dec 21 13:19:32.039: INFO: stdout: "update-demo-nautilus-mvl6n update-demo-nautilus-vwlks "
Dec 21 13:19:32.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mvl6n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:19:32.225: INFO: stderr: ""
Dec 21 13:19:32.225: INFO: stdout: ""
Dec 21 13:19:32.226: INFO: update-demo-nautilus-mvl6n is created but not running
Dec 21 13:19:37.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:19:37.402: INFO: stderr: ""
Dec 21 13:19:37.402: INFO: stdout: "update-demo-nautilus-mvl6n update-demo-nautilus-vwlks "
Dec 21 13:19:37.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mvl6n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:19:37.518: INFO: stderr: ""
Dec 21 13:19:37.518: INFO: stdout: ""
Dec 21 13:19:37.518: INFO: update-demo-nautilus-mvl6n is created but not running
Dec 21 13:19:42.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:19:44.139: INFO: stderr: ""
Dec 21 13:19:44.139: INFO: stdout: "update-demo-nautilus-mvl6n update-demo-nautilus-vwlks "
Dec 21 13:19:44.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mvl6n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:19:44.509: INFO: stderr: ""
Dec 21 13:19:44.509: INFO: stdout: ""
Dec 21 13:19:44.509: INFO: update-demo-nautilus-mvl6n is created but not running
Dec 21 13:19:49.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:19:49.742: INFO: stderr: ""
Dec 21 13:19:49.742: INFO: stdout: "update-demo-nautilus-mvl6n update-demo-nautilus-vwlks "
Dec 21 13:19:49.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mvl6n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:19:49.912: INFO: stderr: ""
Dec 21 13:19:49.912: INFO: stdout: "true"
Dec 21 13:19:49.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mvl6n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:19:50.064: INFO: stderr: ""
Dec 21 13:19:50.064: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 21 13:19:50.064: INFO: validating pod update-demo-nautilus-mvl6n
Dec 21 13:19:50.075: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 21 13:19:50.075: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 21 13:19:50.075: INFO: update-demo-nautilus-mvl6n is verified up and running
Dec 21 13:19:50.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vwlks -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:19:50.224: INFO: stderr: ""
Dec 21 13:19:50.224: INFO: stdout: "true"
Dec 21 13:19:50.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vwlks -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:19:50.368: INFO: stderr: ""
Dec 21 13:19:50.368: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 21 13:19:50.368: INFO: validating pod update-demo-nautilus-vwlks
Dec 21 13:19:50.393: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 21 13:19:50.393: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 21 13:19:50.393: INFO: update-demo-nautilus-vwlks is verified up and running
STEP: scaling down the replication controller
Dec 21 13:19:50.397: INFO: scanned /root for discovery docs: 
Dec 21 13:19:50.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:19:51.818: INFO: stderr: ""
Dec 21 13:19:51.818: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 21 13:19:51.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:19:51.955: INFO: stderr: ""
Dec 21 13:19:51.955: INFO: stdout: "update-demo-nautilus-mvl6n update-demo-nautilus-vwlks "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 21 13:19:56.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:19:57.244: INFO: stderr: ""
Dec 21 13:19:57.244: INFO: stdout: "update-demo-nautilus-mvl6n update-demo-nautilus-vwlks "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 21 13:20:02.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:20:02.398: INFO: stderr: ""
Dec 21 13:20:02.399: INFO: stdout: "update-demo-nautilus-mvl6n "
Dec 21 13:20:02.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mvl6n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:20:02.686: INFO: stderr: ""
Dec 21 13:20:02.686: INFO: stdout: "true"
Dec 21 13:20:02.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mvl6n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:20:02.885: INFO: stderr: ""
Dec 21 13:20:02.885: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 21 13:20:02.885: INFO: validating pod update-demo-nautilus-mvl6n
Dec 21 13:20:02.922: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 21 13:20:02.923: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 21 13:20:02.923: INFO: update-demo-nautilus-mvl6n is verified up and running
STEP: scaling up the replication controller
Dec 21 13:20:02.928: INFO: scanned /root for discovery docs: 
Dec 21 13:20:02.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:20:04.317: INFO: stderr: ""
Dec 21 13:20:04.317: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 21 13:20:04.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:20:04.409: INFO: stderr: ""
Dec 21 13:20:04.409: INFO: stdout: "update-demo-nautilus-c9d2n update-demo-nautilus-mvl6n "
Dec 21 13:20:04.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9d2n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:20:04.532: INFO: stderr: ""
Dec 21 13:20:04.532: INFO: stdout: ""
Dec 21 13:20:04.532: INFO: update-demo-nautilus-c9d2n is created but not running
Dec 21 13:20:09.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:20:09.806: INFO: stderr: ""
Dec 21 13:20:09.807: INFO: stdout: "update-demo-nautilus-c9d2n update-demo-nautilus-mvl6n "
Dec 21 13:20:09.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9d2n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:20:10.365: INFO: stderr: ""
Dec 21 13:20:10.365: INFO: stdout: ""
Dec 21 13:20:10.365: INFO: update-demo-nautilus-c9d2n is created but not running
Dec 21 13:20:15.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:20:15.574: INFO: stderr: ""
Dec 21 13:20:15.574: INFO: stdout: "update-demo-nautilus-c9d2n update-demo-nautilus-mvl6n "
Dec 21 13:20:15.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9d2n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:20:15.721: INFO: stderr: ""
Dec 21 13:20:15.721: INFO: stdout: "true"
Dec 21 13:20:15.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9d2n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:20:15.822: INFO: stderr: ""
Dec 21 13:20:15.822: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 21 13:20:15.822: INFO: validating pod update-demo-nautilus-c9d2n
Dec 21 13:20:15.834: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 21 13:20:15.834: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 21 13:20:15.834: INFO: update-demo-nautilus-c9d2n is verified up and running
Dec 21 13:20:15.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mvl6n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:20:16.008: INFO: stderr: ""
Dec 21 13:20:16.008: INFO: stdout: "true"
Dec 21 13:20:16.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mvl6n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:20:16.247: INFO: stderr: ""
Dec 21 13:20:16.247: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 21 13:20:16.247: INFO: validating pod update-demo-nautilus-mvl6n
Dec 21 13:20:16.262: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 21 13:20:16.262: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 21 13:20:16.262: INFO: update-demo-nautilus-mvl6n is verified up and running
STEP: using delete to clean up resources
Dec 21 13:20:16.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:20:16.401: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 21 13:20:16.401: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 21 13:20:16.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-spc4q'
Dec 21 13:20:16.603: INFO: stderr: "No resources found.\n"
Dec 21 13:20:16.603: INFO: stdout: ""
Dec 21 13:20:16.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-spc4q -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 21 13:20:16.742: INFO: stderr: ""
Dec 21 13:20:16.742: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:20:16.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-spc4q" for this suite.
Dec 21 13:20:42.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:20:42.879: INFO: namespace: e2e-tests-kubectl-spc4q, resource: bindings, ignored listing per whitelist
Dec 21 13:20:42.982: INFO: namespace e2e-tests-kubectl-spc4q deletion completed in 26.222525174s

• [SLOW TEST:71.958 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
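The go-template kubectl runs repeatedly in the spec above (`{{if (exists . "status" "containerStatuses")}}...{{end}}`) prints "true" only when a container status with the expected name has a `running` entry in its state; an empty stdout means "created but not running". The same predicate, rendered as a Python sketch over a pod dict:

```python
def container_running(pod: dict, container_name: str) -> bool:
    """Python rendering of the status-check go-template in the log:
    true only if a containerStatus named `container_name` exists and
    its state map contains a "running" entry."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == container_name and "running" in status.get("state", {}):
            return True
    return False

# No containerStatuses yet -> the template would print nothing.
pending_pod = {"status": {}}
running_pod = {"status": {"containerStatuses": [
    {"name": "update-demo",
     "state": {"running": {"startedAt": "2019-12-21T13:19:45Z"}}}]}}

assert not container_running(pending_pod, "update-demo")
assert container_running(running_pod, "update-demo")
```

Note that `exists` in the original template is a kubectl extension to Go's text/template, not a standard template function.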
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:20:42.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1221 13:20:53.747997       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 21 13:20:53.748: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:20:53.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-j4hg4" for this suite.
Dec 21 13:21:01.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:21:01.497: INFO: namespace: e2e-tests-gc-j4hg4, resource: bindings, ignored listing per whitelist
Dec 21 13:21:01.511: INFO: namespace e2e-tests-gc-j4hg4 deletion completed in 7.752863264s

• [SLOW TEST:18.529 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
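"Delete pods created by rc when not orphaning" exercises cascading deletion: because each pod carries an ownerReference to the replication controller, deleting the rc without orphaning lets the garbage collector remove the dependents too. A simplified model of that cascade (not the controller's actual code):

```python
def cascade_delete(objects, owner_uid):
    """Sketch of background cascading deletion: removing an owner also
    removes every object whose ownerReferences point at it, transitively.
    Simplified model of the Kubernetes garbage collector."""
    doomed = {owner_uid}
    changed = True
    while changed:                      # iterate to a fixed point for chains
        changed = False
        for obj in objects:
            refs = {r["uid"] for r in obj.get("ownerReferences", [])}
            if obj["uid"] not in doomed and refs & doomed:
                doomed.add(obj["uid"])
                changed = True
    return [o for o in objects if o["uid"] not in doomed]

cluster = [
    {"uid": "rc-1"},
    {"uid": "pod-a", "ownerReferences": [{"uid": "rc-1"}]},
    {"uid": "pod-b", "ownerReferences": [{"uid": "rc-1"}]},
    {"uid": "pod-c"},  # unowned object survives the cascade
]
assert [o["uid"] for o in cascade_delete(cluster, "rc-1")] == ["pod-c"]
```

Orphaning (`propagationPolicy=Orphan`) would instead strip the ownerReferences and leave the pods in place.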
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:21:01.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 21 13:21:01.792: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-hdwb7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hdwb7/configmaps/e2e-watch-test-resource-version,UID:bbc293cb-23f4-11ea-a994-fa163e34d433,ResourceVersion:15575342,Generation:0,CreationTimestamp:2019-12-21 13:21:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 21 13:21:01.793: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-hdwb7,SelfLink:/api/v1/namespaces/e2e-tests-watch-hdwb7/configmaps/e2e-watch-test-resource-version,UID:bbc293cb-23f4-11ea-a994-fa163e34d433,ResourceVersion:15575343,Generation:0,CreationTimestamp:2019-12-21 13:21:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:21:01.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-hdwb7" for this suite.
Dec 21 13:21:07.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:21:08.221: INFO: namespace: e2e-tests-watch-hdwb7, resource: bindings, ignored listing per whitelist
Dec 21 13:21:08.235: INFO: namespace e2e-tests-watch-hdwb7 deletion completed in 6.35571117s

• [SLOW TEST:6.724 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
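The watch spec above starts a watch at the resourceVersion returned by the first update, so only the later mutation (RV 15575342) and the deletion (RV 15575343) are delivered. A toy replay model of that filtering, treating resource versions as comparable integers purely for illustration (real resourceVersions are opaque and must not be parsed by clients):

```python
def replay_from(events, resource_version):
    """Sketch of starting a watch at a specific resourceVersion:
    only events newer than the requested version are delivered."""
    return [(etype, obj) for etype, obj in events
            if int(obj["resourceVersion"]) > int(resource_version)]

history = [
    ("ADDED",    {"resourceVersion": "15575340"}),
    ("MODIFIED", {"resourceVersion": "15575341"}),  # first update: watch starts here
    ("MODIFIED", {"resourceVersion": "15575342"}),
    ("DELETED",  {"resourceVersion": "15575343"}),
]
delivered = replay_from(history, "15575341")
assert [etype for etype, _ in delivered] == ["MODIFIED", "DELETED"]
```

This matches the two notifications the spec observed: the second modification and the delete, but not the creation or first update.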
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Dec 21 13:21:08.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-bfc0ee9f-23f4-11ea-bbd3-0242ac110005
STEP: Creating a pod to test consume configMaps
Dec 21 13:21:08.469: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bfc1d52f-23f4-11ea-bbd3-0242ac110005" in namespace "e2e-tests-projected-46w2b" to be "success or failure"
Dec 21 13:21:08.524: INFO: Pod "pod-projected-configmaps-bfc1d52f-23f4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 55.716546ms
Dec 21 13:21:10.680: INFO: Pod "pod-projected-configmaps-bfc1d52f-23f4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211044406s
Dec 21 13:21:12.859: INFO: Pod "pod-projected-configmaps-bfc1d52f-23f4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.390763579s
Dec 21 13:21:15.254: INFO: Pod "pod-projected-configmaps-bfc1d52f-23f4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.784881631s
Dec 21 13:21:17.270: INFO: Pod "pod-projected-configmaps-bfc1d52f-23f4-11ea-bbd3-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.80098526s
Dec 21 13:21:19.316: INFO: Pod "pod-projected-configmaps-bfc1d52f-23f4-11ea-bbd3-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.847769877s
STEP: Saw pod success
Dec 21 13:21:19.317: INFO: Pod "pod-projected-configmaps-bfc1d52f-23f4-11ea-bbd3-0242ac110005" satisfied condition "success or failure"
Dec 21 13:21:19.327: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-bfc1d52f-23f4-11ea-bbd3-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 21 13:21:19.588: INFO: Waiting for pod pod-projected-configmaps-bfc1d52f-23f4-11ea-bbd3-0242ac110005 to disappear
Dec 21 13:21:19.607: INFO: Pod pod-projected-configmaps-bfc1d52f-23f4-11ea-bbd3-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Dec 21 13:21:19.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-46w2b" for this suite.
Dec 21 13:21:26.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 21 13:21:26.332: INFO: namespace: e2e-tests-projected-46w2b, resource: bindings, ignored listing per whitelist
Dec 21 13:21:26.581: INFO: namespace e2e-tests-projected-46w2b deletion completed in 6.963273078s

• [SLOW TEST:18.345 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
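The projected-configMap spec polls the pod phase until the "success or failure" condition is met: keep waiting while the phase is Pending or Running, stop once it reaches a terminal phase. A small sketch of that condition (illustrative helper, not the framework's API):

```python
TERMINAL_PHASES = {"Succeeded", "Failed"}

def pod_finished(phase: str):
    """(done, succeeded) for the "success or failure" wait in the log:
    not done while Pending/Running, done once the phase is terminal."""
    return phase in TERMINAL_PHASES, phase == "Succeeded"

# Phase sequence as observed in the log: several Pending polls, then Succeeded.
phases = ["Pending", "Pending", "Pending", "Succeeded"]
results = [pod_finished(p) for p in phases]
assert all(not done for done, _ in results[:-1])
assert results[-1] == (True, True)
```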
S
Dec 21 13:21:26.582: INFO: Running AfterSuite actions on all nodes
Dec 21 13:21:26.582: INFO: Running AfterSuite actions on node 1
Dec 21 13:21:26.582: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 9218.616 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS